Open Access Review Article

A Social Psychological Toolbox for Clinical Psychology

Published Online: https://doi.org/10.1027/2151-2604/a000447

Abstract

Abstract. Translational science involves the fruitful interplay between basic research paradigms and related fields of application. One promising candidate for such synergy is the relationship between social and clinical psychology. Although the relation is principally bi-directional, such that either discipline can take the role of the basic and the applied science, we take the perspective of transfer from basic social and cognitive social psychology to applications in the clinical realm. Starting from a historical sketch of some of the earliest topics in the interface of both disciplines, we first conclude that truly integrative co-theorizing is conspicuously missing. Then, however, we recognize the strong potential for productive collaboration at the pragmatic level of an adaptive research toolbox containing approved methods and compact theoretical tools that carry over between disciplines. We outline the notion of a generic, provisional toolbox as distinguished from a fixed repertoire of established standard procedures. We provide examples of two subsets of tools: methods and logical principles required for proper diagnostic reasoning, and theoretically founded influence tools that can enrich the repertoire of therapeutic interventions. Rather than propagating a normatively prescriptive toolbox, we interpret translational science as a pluralistic endeavor, such that different clinicians complete their personalized toolboxes in manifold ways.

The division of labor between social and clinical psychology, or more generally between fundamental science and applied research and practice, is characterized by respectful co-existence. In this article, we take a translational-science perspective, examining the conditions under which basic methods and theoretical concepts from social and social-cognitive psychology can enrich clinical psychological research and practice (Garb, 1994, 2010; Maddux & Tangney, 2010). Although the reverse perspective is equally interesting and challenging, it would require a completely different analysis. Going beyond Kurt Lewin’s (1947) famous statement that “there is nothing more practical than a good theory,” we argue that good social psychological theories and sound methods are not only highly practical and useful; they are central for all sound and responsible reasoning in diagnostics and effective therapeutic action.

Preview of the Present Article

However, fruitful collaboration between both fields is not visible at all levels. We first provide a brief historical sketch of three origins of early research, which attained cult status and promised to instigate joint research and integrative theorizing in social and clinical psychology. Thus, we refer to David Rosenhan’s (1973) classical article in Science on “being sane in insane places,” highlighting the notion of self-fulfilling prophecies and the power of the situation, which should be at least as visible in a psychiatric hospital environment as in an experimental laboratory in social psychology. We also present Scheff’s (1974) famous work on labeling and maladaptive attribution and an outline of George Kelly’s (1973) inspiring writings on “fixed role therapy,” embedded in his highly influential two-volume work on the psychology of personal constructs. Yet, despite the enormous prominence and multiplier effect of these exciting and challenging ideas, they did not instigate truly integrative social-clinical work. These extremely thought-provoking phenomena did not generate novel theories and empirical insights beyond the existing evidence on self-fulfilling prophecies, attribution, and dramaturgic therapy. It seems fair to say that these early highlights remained ideally suited for the feuilleton pages of newspapers or magazines but failed to inspire theoretical progress and empirical findings published in leading journals.

Despite the paucity of joint theorizing, however, we argue that the collaboration of social and clinical psychology is flourishing at another, more pragmatic level. It seems unrealistic indeed to expect distinct theories and unique findings to emerge from two research fields with a completely different agenda. Yet, the methods and knowledge modules developed in either field are nevertheless pre-adapted to offer a useful toolbox for the other field. The notion of a tool emphasizes the functional value of an instrument or idea, which is not restricted to the context in which it was originally developed but can help to solve problems in a variety of domains. This holds not only for a mechanic’s pliers or saw, but also for a scientist’s tools, including methods, logical rules, causal models, measurement devices, or empirical findings.

We will discuss two classes of tools that appear to be equally useful but serve different functions in different contexts. First, we will consider a set of basic methods and logical tools that are essential across scientific domains. Understanding and controlling these tools is particularly important for sound diagnostic reasoning when clinicians engage in diagnostic inference, risk assessment, prognostic judgment, ruling out alternative diagnoses, and evading manifold sources of error and bias in the diagnostic process.

Then, theoretical concepts and experimental models of cause-effect relations constitute the second class of tools to be exploited in the therapeutic process. Although these tools must be updated over time, as novel findings arise and old ones fade out, competence in such evidence-based models and empirical findings can greatly enrich the clinician’s therapeutic repertoire. Thus, in addition to the traditional causal underpinnings of the clinical psychology curriculum – like Pavlovian and evaluative conditioning, operant learning, stress mechanisms, stress versus relaxation, attribution, habituation, or flooding – a modern version of a toolbox is enriched with up-to-date concepts from social-cognitive and other areas of fundamental science. A toolbox may include, depending on the therapist’s treatment school and client’s problem, mediation techniques for couples counseling, priming techniques, nudging and boosting treatments of obesity, implementation intentions to prevent symptoms, or construal-level theory to foster self-control, to name but a few.

Early Highlights in Social and Clinical Psychology

Fixed-Role Therapy

A prominent example, to start with, can be found in George Kelly’s (1955) seminal book on the psychology of personal constructs, with a chapter devoted to fixed-role therapy. This therapeutic intervention was motivated by Kelly’s over-arching theoretical message that all personal growth – whether in therapy, education, or intellectual problem solving – reflects a periodic interplay of two complementary adaptive functions, loosening and tightening. Analogous to random variation (loosening) and strict selection (tightening) in Darwinian evolution theory, Kelly depicted the “creative cycle” as a learning process that requires the learner to (a) find alternatives to crystalized behavior routines and then (b) to select the best alternative and implement it as the new routine.

In fixed-role therapy, the patient is instructed to play a social role for a restricted period of time, which can be expected to serve a loosening function. For instance, a patient with an insufficiency syndrome or partnership problems might be instructed to play the role of a social reinforcer, who provides rewarding compliments to all other people, even those with clearly higher social status or self-confidence. The patient knows it is only a role play, like an actor’s performance. The hypothesis is that the resulting social feedback echo can help patients to get rid of (to loosen) the crystalized belief that they are not socially significant enough. A new authentic social role can then be implemented (tightened) in the course of the therapeutic process.

On “Being Sane in Insane Places”

Another memorable example of an original compelling idea for clinical and social psychologists can be found in David Rosenhan’s (1973) work on self-fulfilling prophecies in institutional environments, illustrated in his famous article “on being sane in insane places.” Rosenhan described what can happen to normal (“sane”) people entering a hospital where they are treated as psychiatric (“insane”) patients. They had virtually no chance to convince nurses and doctors of the opposite. The more they undertook to prove their “normality,” the more cynical and malicious treatment they got in return, and the more notoriously their behavior was re-interpreted as evidence for a mental disorder. In the end, the only way out of this dilemma was for patients to start believing in their pathological state – if only to give meaning and viable explanations to their inescapable situation.

Although, or precisely because, the investigative journalist Susannah Cahalan (2019) and several other renowned scientists (Ruscio, 2004) raised serious doubts concerning the veracity of Rosenhan’s (1973) report, this provocative paper, published in Science and fully in the spirit of the self-fulfilling-prophecy Zeitgeist, could have instigated a huge research program with high publicity and a low threshold for generous research funding. Nothing of the sort was realized, however.

Labeling

For one more example of forgone theory development, consider Thomas Scheff’s (1974) famous work on labeling processes in the etiology and the therapy of mental disorders. The power of verbal labels and attributions can be enormous (Martinez et al., 2011), even when labels consist of nothing but abstract nominalizations (Bolinger, 1973), detached from concrete behaviors and situations. It is difficult for an individual to disconfirm that he or she is a “stutterer,” a “coward,” an “asocial being,” or an “egoistic taker.” Given that patients cannot disconfirm diagnostic labels such as “depressive,” “schizophrenic,” “psychotic” or “suicidal,” they may adjust their self-image accordingly. Verbal labeling effects are not confined to overt speech acts (Abdullah & Brown, 2020). An abstract language style alone can contribute to the feeling of learned helplessness (Alloy et al., 1984), when abstract adjectives are used to provide people with feedback about their negative behavior, creating the impression that their unwanted attributes (e.g., “arrogant,” “socially incompetent,” “impotent”) are general, temporally stable, and hard to control voluntarily (Fincham & Bradbury, 1992).

Reasons for the Nil Return

Yet none of these three prominent highlights instigated novel, ground-breaking work. What appeared to be stellar moments in behavioral science did not fertilize novel theorizing or genuine interdisciplinary research programs. Despite the prominence of the labeling concept, its perfect fit with the attribution Zeitgeist of the seventies, and its status as a frequently cited topic in the following decades, no labeling boom inspired distinct theoretical innovations. The attribution era in social psychology faded out rather than fertilizing a new movement comparable to linguistic relativity (Lucy, 2016) or gender talk. Clinical psychology, too, did not elaborate on labeling-based therapy programs. Rather, “labeling” remained a highlight for feuilleton pages and a concept cited in the final discussions of countless publications.

In a similar vein, Rosenhan’s (1973) challenge on being sane in insane places did not instigate novel theorizing on causal origins and critical boundary conditions of self-fulfilling prophecies or confirmation biases that went beyond the established conceptions advocated by Rosenthal (2002) or Downey et al. (1998). Nor did it start a novel discourse in clinical psychology on the power of the situation or on environmental constraints on therapeutic processes. Instead, the report itself came under suspicion of being faked and lacking scientific sincerity.

George Kelly’s (1955) seminal psychology of personal constructs, which could not have been more prominent at that time, also failed to start a new theoretical discourse or clinical movement on symbolic interactionism (Mead, 1934; Snyder, 1984) or on the therapeutic impact of creative or self-generative processes, the effectiveness of which was praised in other applied areas (Bjork, 1994; Mulligan & Lozito, 2004). Kelly’s idea invited a synthesis of dissonance theory and self-perception theory, operant learning, improvisation, and mental simulation processes that could have been unfolded and refined in promising new research, to be exploited for therapeutic purposes, health education, political campaigns, and ecological education. Such a synthesis never took place, nor was there any attempt to elucidate and systematically explain the underpinnings of fixed-role therapy, apart from a few isolated applications in treating anxiety (Abe et al., 2011) or sexual disorders (Horley, 2006).

All these chances for cross-fertilization remained unexploited. The inertia of the mainstream research programs and core theories of either discipline prevented a stronger impact of such emergent topics as fixed-role therapy, labeling, and self-fulfilling prophecies. We believe this is a normal state of affairs in translational science or interdisciplinary work. Indeed, one could have hardly expected either discipline to change its theoretical underpinnings in the light of evidence from a neighboring field. However, we hasten to add that this somewhat disillusioning insight does not mean that no synergy between disciplines is at work. In the remainder of this paper, we try to elucidate how vigorous and productive collaboration is possible at another, more pragmatic level, namely the level of a versatile toolbox, the functional value of which is detached from disciplinary goals and constraints. Let us first outline the notion of a toolbox and its contents, and then discuss the importance of two classes of tools, methods and influence tools, for diagnostic reasoning and therapeutic effectiveness, respectively.

From Theory Creation to the Shirtsleeve Pragmatics of a Toolbox

We believe that tool use allows for the actual exchange between scientific disciplines. Because a tool constitutes a multi-purpose instrument that is not confined to a single purpose, it can be shared between users and across purposes. Why should a tool developed in one domain (e.g., social psychology) not be applicable in clinical psychology, or other applied domains?

The contents of the toolbox outlined in this section are of course not meant to be exhaustive. On the contrary, its incomplete, improvised, and creative status is conceived as one of its major assets. Individual therapists or clinicians are invited to add supplementary tools according to their profile, and the resulting diversity of individual toolboxes makes translational science generative and pluralistic. For another disclaimer, discussing the entire contents of the toolbox in Tables 1 and 2 would exceed the scope of this article, which only allows us to discuss a couple of illustrative examples from both compartments, methods and influence tools, at some length.

Table 1 Contents of a versatile basic-science toolbox shared by social and clinical psychologists (Part 1: Logical and methodological tools to improve diagnostic inferences)
Table 2 Contents of a versatile toolbox shared by social and clinical psychologists (Part 2: Causal principles to enrich the therapeutic repertoire)

Of utmost importance are logical and methodological tools that set science apart from other cultural domains. Professional expertise in science is primarily a matter of established methods and standardized procedures. Knowledge of human nature or the motive to help people is not a matter of scientific training. The major distinctive feature of a scientist, compared to journalists, nurses, and prosocial persons in other cultural domains, is the ability to evaluate the validity and viability of psychological assumptions, to separate the wheat from the chaff. A clinical psychologist’s vocational sincerity and responsibility depend on his or her professional mastery of basic methods tools, which are particularly essential for diagnostic judgment. Table 1 refers to one of the two main compartments of the adaptive toolbox we are suggesting, which contains these fundamental methods tools.

Methods and Logical Tools for Sound Diagnostic Inferences

A necessary condition for valid and responsible scientific reasoning is to gain a firm understanding of the methodological tools in the left-most column of Table 1 (aside from others not listed herein). While the entries in the second and third columns provide at least cursory explanations, we confine ourselves to a more elaborate discussion of only two principles.

Because no test or diagnostic instrument is perfectly reliable, the diagnostic inference process takes place under uncertainty, relying on probabilities rather than fixed deterministic rules. The basic uncertainty of the statistical relation between a disease (D) and a diagnostic measure or test result (T) is complicated by the fallible asymmetry of conditional reasoning. The diagnostic uncertainty of inferring a disease D from a diagnostic test T, as specified by the conditional probability p(D|T) of D given T, is typically much higher than the causal uncertainty of the reverse conditional probability p(T|D) of T given D. For example, the probability of a positive mammogram given breast cancer is higher by orders of magnitude than the probability of breast cancer given a positive mammogram (Gigerenzer & Hoffrage, 1995).

There are two major reasons for this dangerous asymmetry, which is at the heart of many diagnostic shortcomings and cognitive illusions in social psychology (Gavanski & Hui, 1992). First, expert knowledge typically consists of causal conditionals. Given extended experience with patients who are a priori categorized into different categories (D), like depression, borderline, or schizophrenia, expert knowledge refers to the probabilities of symptoms, test scores, and behavioral criteria (T) given that patients belong to these categories, which corresponds to p(T|D). Experts’ vocational experience hardly ever runs the other way around, such that patients’ test scores or behavioral measures are given first and the likelihood p(D|T) of disorders is then assessed contingent on these tests or measures. The application of tests and diagnostic examinations is rather contingent on patients’ categorization by the disorder. Clinical training and experience hardly ever provide opportunities to learn the probability of disorders given test scores. It is, therefore, more natural and convenient to estimate test values contingent on disorders, p(T|D), than to estimate disorders contingent on test values, p(D|T) (Fiedler, 2010; Gavanski & Hui, 1992).

The second reason for the confusion is that the conditional probabilities p(D|T) and p(T|D) can diverge dramatically. In most naturally occurring cases, the diagnostic conditional p(D|T) is much smaller than the reverse causal conditional p(T|D). Both reasons together exacerbate the uncertainty of diagnostic inferences and make it hard to get a grasp of the whole probabilistic picture.

Asymmetry of Diagnostic and Causal Conditionals

Let us illustrate this intricate problem with a prominent example. The causal probability p(positive HIV test|HIV) of obtaining a positive HIV test result given that test persons are infected with HIV is virtually 100%. In contrast, the reverse probability p(HIV|positive HIV test) of a diagnostic inference of HIV from a positive test result is less than 10% (Swets et al., 2000). Thus, although the hit rate of an HIV test is virtually perfect, a positive test result provides only weak evidence for diagnosing HIV. The same counterintuitive asymmetry holds for many other diagnostic situations. The likelihood is high that a mammogram is positive given breast cancer (Gigerenzer & Hoffrage, 1995), that an Implicit Association Test score is high if the respondent is prejudiced (Fiedler, 2010), or that a perpetrator will be identified by an eyewitness if they are guilty (Wells & Olson, 2003). However, the reverse (diagnostic) likelihood of breast cancer, prejudice, and guilt given a positive mammogram, a significant IAT score, or an eyewitness identification is typically much lower.
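The asymmetry can be made concrete with a short calculation. The numbers below are purely illustrative (a hit rate of 99.9%, a false-positive rate of 0.05%, and a base rate of 1 in 10,000), not values taken from Swets et al. (2000), and the function name is ours:

```python
# Illustrative Bayes calculation: a near-perfect hit rate p(T|D) can still
# coexist with weak diagnostic value p(D|T). All numbers are hypothetical.

def positive_predictive_value(hit_rate, specificity, base_rate):
    """p(D|T): probability of the disease given a positive test."""
    true_pos = hit_rate * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# p(T|D) = 99.9%, yet with a base rate of only 1 in 10,000 screened persons:
ppv = positive_predictive_value(hit_rate=0.999,
                                specificity=0.9995,
                                base_rate=0.0001)
print(round(ppv, 3))  # → 0.167: p(D|T) is far below the hit rate
```

With rarer conditions or less specific tests, the diagnostic probability drops even further, which is exactly the pattern described in the text.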

To visualize this counter-intuitive asymmetry, consider the set relations in Figure 1. Many diagnostic tests are designed to minimize false negatives, that is, not to miss an actual case of HIV, breast cancer, prejudice, or a guilty perpetrator. The opposite case of a false-positive error appears less costly, if only for liability reasons. There are manifold opportunities to correct later for an unduly diagnosed HIV infection, breast cancer, prejudice, or guilt. In any case, the set size or base-rate of T often exceeds the set size or base-rate of D, in a wide variety of diagnostic settings, in clinical and health psychology, but also in basic and applied social psychology, legal and political science, risk assessment, and attitude measurement (Fiedler, 2010).

Figure 1 Graphical illustration of the asymmetry of causal inferences (from D to T) and diagnostic inferences (from T to D), due to unequal set size (or base-rate of T and D).

This asymmetry of conditional inferences constitutes a truly interdisciplinary problem to be fixed and controlled by the same tool. Avoiding the so-called base-rate fallacy is equally challenging for social and clinical psychologists. Being prepared for the Bayesian mathematics of diagnostic inferences is as essential an asset of a good clinician as the ability to listen to patients, to build trusting relationships, and to gain experience with multiple tests, review skills, and interview techniques. Indeed, all these skills can be in vain if the clinician’s toolbox does not contain a remedy for base-rate fallacies (or other pitfalls of diagnostic judgment).

Even better than merely possessing a tool is the ability to use it competently. According to Bayes’ theorem, the ratio of two reverse conditional probabilities is equivalent to the ratio of base-rates: p(D|T)/p(T|D) = p(D)/p(T). Thus, if the likelihood of a positive test result p(T) is ten times the likelihood p(D) of the disease (as shown in Figure 1), the diagnostic probability p(D|T) is only 1/10 of the causal probability p(T|D) – a truly impressive asymmetry!
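The identity can be checked directly. In the following sketch the probabilities are arbitrary illustrative values satisfying the ratio p(T) = 10 × p(D) from the figure:

```python
# Numerical check of the Bayes identity p(D|T) / p(T|D) = p(D) / p(T),
# using arbitrary illustrative probabilities.
p_D = 0.01          # base rate of the disorder
p_T = 0.10          # base rate of positive test results (ten times p_D)
p_T_given_D = 0.95  # assumed causal conditional (hit rate)

p_D_given_T = p_T_given_D * p_D / p_T  # Bayes' theorem
print(round(p_D_given_T, 3))  # → 0.095, exactly 1/10 of p(T|D)
```

Whatever hit rate one assumes, the diagnostic conditional is scaled down by the base-rate ratio p(D)/p(T), which is the whole point of the asymmetry.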

Although the so-called base-rate fallacy has been a prominent topic in social and cognitive psychology since the seminal work by Tversky and Kahneman (1971, 1974), its relevance continues to be neglected. The base rate of attention deficit hyperactivity disorder (ADHD) is much lower than the base rate of occasional inattention; depression is less prevalent than depressive states; schizophrenia is rare relative to surrealistic word associations, and suicide is less likely than black humor. Because clinical practice is replete with unequal base rates and asymmetric conditionals, some basic understanding of Bayes’ theorem is an integral part of the clinician’s toolbox. Although many practitioners would claim that they deliberately decided not to study statistics, mathematics, or set theory, the nature of their vocational domain calls for an understanding of such fundamental scientific issues as the base-rate fallacy (Tversky & Kahneman, 1974), the regression trap (Campbell, 1996), and the vicissitudes of test theory (Astivia et al., 2020).

COVID-19 – A Paradigm for Assessing and Communicating Health Risk

The need to counsel patients and the public on how to cope with the current pandemic certainly constitutes a litmus test of diagnostic competence. Hardly any health-related topic has occupied the health and life sciences to a similar degree as the danger of COVID-19 infection and mortality. A very basic task for a scientific consultant is therefore to understand and communicate the statistics presented every day in the newspapers.

However, even simple statistics such as infection rates (i.e., the number of positively tested people divided by the number of all people who underwent a test) or mortality rates (i.e., the number of positively tested corona cases who died divided by the number of all positively tested people) are subject to serious cognitive biases. Even educated and statistically trained people fall prey to a ratio bias (Denes-Raj & Epstein, 1994) or denominator neglect (Reyna & Brainerd, 2008), giving more weight to the numerator than to the denominator of a frequency ratio. Just as people prefer a 10/100 lottery (offering 10 wins out of 100 balls in an urn) to a 1/10 lottery because the larger lottery offers more chances to win (in the numerator), the same corona infection rate looms larger if the reference set is large rather than small.

Due to this denominator neglect, the habit of publishing absolute numbers of infections and deaths in the newspapers induced serious cognitive biases. Kuhbandner (2020) found that “the rapid increase in reported new infections was largely attributable to the rapid increase in conducted tests” (p. 1). Even educated people mistook the raw number of infections for a probability. Until late in autumn 2020, journalists, politicians, and health professionals pointed to a dangerous increase in the rate of infections, although the ratio of infected to all tested people remained remarkably constant until the 40th week of the year (see Table 3). Although the ratio increased drastically in the following weeks, the failure to discriminate between basic descriptive statistics constitutes a serious deficit that may undermine the credibility of, and trust in, corona policies. The understanding and mastery of even the simplest statistical indices can be an extremely important precondition for effective and responsible risk communication in clinical practice.
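The arithmetic behind this confusion is trivial, which makes the bias all the more remarkable. With hypothetical numbers (not the German data reported in Table 3):

```python
# Hypothetical illustration of denominator neglect: if the share of positive
# tests stays constant, tripling the number of tests triples the absolute
# count of reported infections without any change in the underlying rate.
positive_rate = 0.012                      # assumed constant positive rate
tests_week_a, tests_week_b = 400_000, 1_200_000

cases_a = positive_rate * tests_week_a     # absolute count, week A
cases_b = positive_rate * tests_week_b     # absolute count, week B
print(round(cases_b / cases_a, 6))  # → 3.0, driven entirely by more testing
```

A reader who attends only to the numerators (the case counts) sees a threefold increase; a reader who divides by the denominators (the test counts) sees no change at all.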

Table 3 Illustrating the trouble with absolute and relative statistics in risk assessment with German Corona data. The seemingly strong increase in absolute numbers of corona infections from week 20/2020 to 38/2020 is largely due to an increasing number of tests. Afterward, the number of tests remained rather constant but the ratio of positive tests increased by magnitudes. Data from: https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Daten/Testzahlen-gesamt.html

A frequently cited but premature assumption is that statistics expressed as natural frequencies (cardinal numbers) offer an appropriate and easy-to-understand means to express frequentist statistics. The ratio bias and related research (Fiedler et al., 2016; Smith & Price, 2010) highlight the fact that absolute frequencies can be as bias-prone as normalized probabilities. In any case, modest logical and methodological tools in the clinician’s toolbox are not just needed occasionally. They rather constitute a ubiquitous requirement for all diagnostic reasoning. For clinicians to do a professionally responsible job as advisors and expert consultants, it is absolutely essential to be in full control of these basic tools.

The recent discourse on COVID-19 is just one example of the importance of correct diagnostic and prognostic judgments. Another provocative example is the crucial role of clinicians working as expert witnesses in legal trials. A critically-minded expert witness must understand that even though a matching DNA test is to be expected only once in a million cases, including ten million people in a dragnet investigation means that nine out of ten positively tested suspects are innocent (Hoffrage et al., 2000). Or, if the diagnosis of credibility relies on criteria-based statement analysis (Vrij, 2005), which counts the number of linguistic truth criteria in a witness’s verbal report (Steller & Koehnken, 1989), the expert must be sensitized to the obvious fact that many truth criteria are more likely to be found in a long protocol than in a short one.
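The dragnet arithmetic can be reproduced in a few lines. The figures mirror the example summarized above from Hoffrage et al. (2000), with the added assumption that exactly one guilty person is in the screened pool:

```python
# Dragnet illustration: a DNA match expected only once in a million innocent
# people still yields mostly innocent matches when ten million are screened.
population = 10_000_000
false_match_rate = 1 / 1_000_000
true_perpetrators = 1  # assumption: one guilty person in the pool

false_matches = (population - true_perpetrators) * false_match_rate  # ≈ 10
p_innocent = false_matches / (false_matches + true_perpetrators)
print(round(p_innocent, 2))  # → 0.91: roughly nine out of ten matches innocent
```

The tiny error rate in the numerator is overwhelmed by the huge denominator of innocent people tested, which is the same base-rate logic as in the diagnostic examples above.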

A similar point could be made for the diagnostic relevance of other methods tools listed in Table 1, such as regression to the mean (Fiedler & Unkelbach, 2014), unpacking illusions due to subadditivity (Tversky & Koehler, 1994), or sampling biases (Fiedler & Kutzner, 2015). Although most of these examples are not peculiar to social psychology but represent basic issues for scientific work in all areas, the resulting shortcomings were typically illustrated with reference to social-psychological, societal, ethical, organizational, legal, or moral examples. Indeed, social psychology often serves an enlightening and emancipatory function in the interdisciplinary discourse. Yet, when we now turn to the second compartment of the clinician’s toolbox, containing causal influence tools that can enrich the clinician’s therapeutic repertoire (Table 2), the key role of social psychology as a dispenser of useful tools for effective therapeutic interventions will be more salient.

Evidence-Based Tools to Support Therapeutic Interventions

Although clinical and social psychology are undoubtedly distinct domains, they are intrinsically related, and research domains in one partner discipline can be mapped onto corresponding domains in the other. For instance, both psychotherapy and persuasion constitute special cases of social influence (Cialdini, 2016). Conversely, psychotherapy affords a prominent applied domain for studying social influence. The etiology of depression and other disorders can be translated into an attribution process (Försterling & Harrow, 1988; Storms & Nisbett, 1970; Wilson et al., 2002). Conditioning constitutes an experimental analog of social learning and behavior therapy (Thorpe & Olson, 1997). Self-control is a prime predictor of health and convalescence (Tangney et al., 2004), and self-disclosure is a key to coping and immune-system strengthening (Pennebaker et al., 1988). The relationship between patient and therapist draws heavily on rules of communication (Baldini, 2011; Watzlawick, 1993). The synergy of social and clinical research in such diverse areas as partnership problems, self-disclosure, group dynamics, attribution and self-blame, self-control, and feelings of insufficiency versus self-efficacy is striking.

Because of this naturally existing overlap, it is no surprise that social psychology can offer many causal-influence tools to be exploited by open-minded clinicians. Again, the examples in Table 2 are far from being exhaustive, nor are they standardized. The generic idea of an adaptive toolbox entails an invitation for individual clinicians or therapists to supplement and optimize the toolbox according to their profile and personal competencies. The overall value of the collective toolbox should profit from the diversity of individually composed toolbox versions.

Dissonance Theory and Effort Justification

Our first prominent example of a powerful theoretical tool is effort justification, originating in one of social psychology’s oldest and best-known theories, namely, Festinger’s (1962) theory of cognitive dissonance. It is a telling example of a well-established finding, resting on a clearly articulated causal principle, that applies to various domains, such as school teaching (Lepper et al., 1986; McGrath, 2020), personnel management (Frey, 1992; Maertz & Boyar, 2012), and psychotherapy.

Granting that patient cooperation and motivation are chief determinants of therapy success, a therapist might do everything possible to win the patient’s intrinsic motivation. Dissonance theory speaks directly to intrinsic motivation, which can be induced through effort expenditure that receives insufficient external justification. For instance, consumers like those products most for which they have paid the most money, and students who had to struggle most to be admitted to their favorite major are the most intrinsically motivated (Jang et al., 2017). Conversely, providing extrinsic incentives and even reducing difficulties can undermine intrinsic motivation (Lepper et al., 1986). Lawrence and Festinger (1962) devoted an entire monograph to demonstrating the validity of effort justification in animal learning, which is replete with evidence for the notion that intrinsic motivation increases when the ratio of reward to effort expenditure decreases. Animals internalize effortful actions that are rarely met with reward, not the other way around.

In a clever attempt to exploit this universal law of learning, Axsom and Cooper (1985) tested the hypothesis that making psychotherapy more effortful can increase therapeutic success. Patients in a weight-loss therapy were randomly assigned to three conditions: high effort expenditure, low effort expenditure, and a control condition without the therapy treatment. The effort manipulation involved only extraneous cognitive tasks that were unrelated to the weight-loss therapy and to any popular theory about weight loss. As expected, high-effort participants had lost slightly more weight after three weeks, but the therapeutic advantage of high effort expenditure was especially apparent after six months.

We acknowledge that Axsom and Cooper's (1985) results can probably not be replicated in every therapy with every patient, and research since then has shown that the regulation of intrinsic and extrinsic motivation can be rather complex (Henderlong & Lepper, 2002). However, this example highlights a more general asset of combining therapy with research on learning and motivation. Further examples can be found in the literature on attribution (Metalsky et al., 1995), social comparison (Leahey et al., 2007), and self-enhancement (Sedikides & Alicke, 2012).

To be sure, the principle of effort justification does not afford a tool that can fully replace other therapeutic techniques or causal principles, such as learning rules or linguistic principles of communication. It is sufficient for a useful tool to serve a supplementary function, such as increasing patients' intrinsic motivation, which constitutes a crucial moderator of effective therapy. This is an important property of tools: they typically cannot solve the entire problem. Their advantage, rather, is to make life easier, facilitating the solution of problems that call for the synergy of different means and strategies.

Implementation Intentions

Another candidate for a clinician's toolbox is Gollwitzer and Sheeran's (2006) notion of implementation intentions, which refers to prospective-memory instructions of the "if, then" type ("If I feel insulted, then I respond with a humorous remark"). Implementing if-then instructions can support many therapeutic goals, related to smoking or drug addiction (Ford & Hill, 2012), conflict escalation or aggressive episodes in partnerships, or volitional control over impulsive tendencies (compulsions, hostile behavior, etc.).

Implementation intentions – pre-compiled behavioral routines that only wait for an if-signal to be executed quasi-automatically – are particularly useful to support treatment schedules and regular therapeutic exercises. The same holds for related prospective-memory tools (McDaniel & Einstein, 2007). Again, this example highlights that useful tools need not play a major role in the therapeutic process. They may serve a very useful function if they merely create auspicious boundary conditions for the effectiveness of interventions drawing on other causal principles.

Social Psychology and the Immune System

If therapy is not restricted to repairing a symptom or disease but also serves prophylactic training and learning functions, fostering the immune system may be a relevant aim of wise psychotherapy. A strong and healthy immune system not only shields the body from infection, inflammation, and physiological dysregulation; it also affords a catalyst for interventions aiming at self-enhancement, personal satisfaction, or coping with ostracism (Williams, 2007; Rudert et al., 2020). But how can therapists influence the immune system? Apart from commonplace answers referring to genetics, nutrition, and toxic substances, social psychologists have found that it is possible to foster the immune system through behavioral interventions.

In seminal studies, Pennebaker and colleagues (Pennebaker, 1989; Pennebaker et al., 1988) showed that self-disclosure and the revelation of secrets can have a lasting effect on the immune system. After people had written about their secrets and traumas under experimental conditions, their standard immune indicators improved. When participants wrote freely about their emotions, Petrie et al. (1998) found a beneficial increase in circulating total lymphocytes and CD4 (helper) T lymphocyte levels; in contrast, when participants engaged in thought suppression, CD3 T lymphocyte levels decreased.

Inspired by this pioneering work, subsequent research found intriguing evidence for the negative impact of stress on the immune system (Khansari et al., 1990), but also for the positive influence of positive emotional states (Dillon et al., 1986) and religious practices (Woods et al., 1999). Schaller and Park (2011) further argued that disgust, as an evolution-based emotional cue, can strengthen immune reactions, and a meta-analysis by Oaten et al. (2009) supports this role of disgust as part of a disease-avoidance mechanism. We propose that such advances in studying the interplay of social and biological processes deserve to be included in a clinician's repertoire. A state-of-the-art toolbox should take the immune system into account (West et al., 2020), building on interdisciplinary collaboration among clinical psychologists, social psychologists, and other life scientists.

Other tools could enrich the clinician's repertoire, even though they are detached from the standard clinical training curriculum. For instance, in their demonstration of prudent default settings, Johnson and Goldstein (2003) showed that organ donation rates are high in those countries (like France) in which the default is that all citizens are organ donors unless they declare their unwillingness to donate. In countries with the opposite default (like Germany), the prevalence of organ donation is dramatically lower. Using clever default settings for therapeutic purposes represents a challenge for open-minded therapists, as does exploiting priming effects (Damen et al., 2014), social learning, and social comparisons (Baldwin & Mussweiler, 2018). Fisher and Geiselman's (1992) cognitive interview affords a tool for improving the quality of interview data: it allows interviewees to respond in their own words, in free-response format, relying on their own retrieval structures. Therapists might also enrich their treatment repertoire with empirical insights on self-control (Tangney et al., 2004), narcissism (Baumeister et al., 1996), and obesity (Mata et al., 2010).

Concluding Remarks

Although a historical sketch reveals that social and clinical psychology were not interested in developing a joint discipline based on integrative theorizing, there is rich overlap and fertile collaboration at the shirt-sleeved level of a toolbox, conceived as a repertoire shared by the behavioral sciences, and by social and clinical psychology in particular. While the notion of a toolbox is by definition super-disciplinary, building on the versatility of tools that can be used for any purpose, it seems justified to refer to a "social-psychological toolbox for clinical psychology," for social psychologists have often pointed out the broader significance of theoretical and logical principles. We have emphasized several useful properties of an adaptive toolbox. It is incomplete and open to improvisation, rather than comprehensive and standardized. It offers tools that facilitate the scientist's and practitioner's life, rather than offering complete solutions. It embraces the ability to learn in the light of novel evidence, and it frees the user from the obligation to understand the theoretical underpinnings of the tools, which are available as fixed modules. Tools testify to the fortunate fact that not every wheel has to be reinvented again and again; they highlight the possibility of a free ride on other scientists' inventions.

References

  • Abdullah, T., & Brown, T. L. (2020). Diagnostic labeling and mental illness stigma among Black Americans: An experimental vignette study. Stigma and Health, 5(1), 11–21. https://doi.org/10.1037/sah0000162

  • Abe, H., Imai, S., & Nedate, K. (2011). Effects of fixed-role therapy applying decision-making theory on social anxiety. Japanese Journal of Counseling Science, 44(1), 1–9. https://doi.org/10.11544/cou.44.1

  • Alloy, L. B., Peterson, C., Abramson, L. Y., & Seligman, M. E. (1984). Attributional style and the generality of learned helplessness. Journal of Personality and Social Psychology, 46(3), 681–687. https://doi.org/10.1037/0022-3514.46.3.681

  • Astivia, O. L. O., Kroc, E., & Zumbo, B. D. (2020). The role of item distributions on reliability estimation: The case of Cronbach’s coefficient alpha. Educational and Psychological Measurement, 80(5), 825–846. https://doi.org/10.1177/0013164420903770

  • Axsom, D., & Cooper, J. (1985). Cognitive dissonance and psychotherapy: The role of effort justification in inducing weight loss. Journal of Experimental Social Psychology, 21(2), 149–160. https://doi.org/10.1016/0022-1031(85)90012-5

  • Baldini, F. (2011). Communication in depressive states. In M. Rimondini (Ed.), Communication in cognitive behavioral therapy (pp. 129–147). Springer Science+Business Media.

  • Baldwin, M., & Mussweiler, T. (2018). The culture of social comparison. Proceedings of the National Academy of Sciences of the United States of America, 115(39), E9067–E9074. https://doi.org/10.1073/pnas.1721555115

  • Baumeister, R. F., Smart, L., & Boden, J. M. (1996). Relation of threatened egotism to violence and aggression: The dark side of high self-esteem. Psychological Review, 103(1), 5–33. https://doi.org/10.1037/0033-295X.103.1.5

  • Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). MIT Press.

  • Bolinger, D. (1973). Truth is a linguistic question. Language, 49(3), 539–550. https://doi.org/10.2307/412350

  • Cahalan, S. (2019). The great pretender: The undercover mission that changed our understanding of madness. Grand Central Publishing.

  • Campbell, D. T. (1996). Regression artifacts in time-series and longitudinal data. Evaluation and Program Planning, 19(4), 377–389. https://doi.org/10.1016/S0149-7189(96)00025-0

  • Cialdini, R. (2016). Pre-suasion: A revolutionary way to influence and persuade. Simon & Schuster.

  • Damen, T. G. E., van Baaren, R. B., & Dijksterhuis, A. (2014). You should read this! Perceiving and acting upon action primes influences one’s sense of agency. Journal of Experimental Social Psychology, 50, 21–26. https://doi.org/10.1016/j.jesp.2013.09.003

  • Denes-Raj, V., & Epstein, S. (1994). Conflict between intuitive and rational processing: When people behave against their better judgment. Journal of Personality and Social Psychology, 66(5), 819–829. https://doi.org/10.1037/0022-3514.66.5.819

  • Dillon, K. M., Minchoff, B., & Baker, K. H. (1986). Positive emotional states and enhancement of the immune system. The International Journal of Psychiatry in Medicine, 15(1), 13–18. https://doi.org/10.2190/R7FD-URN9-PQ7F-A6J7

  • Downey, G., Freitas, A. L., Michaelis, B., & Khouri, H. (1998). The self-fulfilling prophecy in close relationships: Rejection sensitivity and rejection by romantic partners. Journal of Personality and Social Psychology, 75(2), 545–560. https://doi.org/10.1037/0022-3514.75.2.545

  • Festinger, L. (1962). A theory of cognitive dissonance. Stanford University Press.

  • Fiedler, K. (2010). The asymmetry of causal and diagnostic inferences: A challenge for the study of implicit attitudes. In J. C. Forgas & W. D. Crano (Eds.), The psychology of attitudes and attitude change (pp. 75–92). Psychology Press.

  • Fiedler, K., Kareev, Y., Avrahami, J., Beier, S., Kutzner, F., & Hütter, M. (2016). Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions. Memory & Cognition, 44(1), 143–161. https://doi.org/10.3758/s13421-015-0537-z

  • Fiedler, K., & Kutzner, F. (2015). Information sampling and reasoning biases: Implications for research in judgment and decision making. In The Wiley Blackwell handbook of judgment and decision making (pp. 380–403). https://doi.org/10.1002/9781118468333.ch13

  • Fiedler, K., & Unkelbach, C. (2014). Regressive judgment: Implications of a universal property of the empirical world. Current Directions in Psychological Science, 23(5), 361–367. https://doi.org/10.1177/0963721414546330

  • Fincham, F. D., & Bradbury, T. N. (1992). Assessing attributions in marriage: The Relationship Attribution Measure. Journal of Personality and Social Psychology, 62(3), 457–468. https://doi.org/10.1037/0022-3514.62.3.457

  • Fisher, R. P., & Geiselman, R. E. (1992). Memory enhancing techniques for investigative interviewing: The Cognitive Interview. Charles C Thomas.

  • Försterling, F., & Harrow, J. T. (1988). Attribution theory in clinical psychology. Wiley.

  • Ford, J. A., & Hill, T. D. (2012). Religiosity and adolescent substance use: Evidence from the National Survey on Drug Use and Health. Substance Use & Misuse, 47, 787–798. https://doi.org/10.3109/10826084.2012.667489

  • Frey, B. S. (1992). Tertium datur: Pricing, regulating and intrinsic motivation. Kyklos, 45(2), 161–184. https://doi.org/10.1111/j.1467-6435.1992.tb02112.x

  • Garb, H. N. (1994). Cognitive heuristics and biases in personality assessment. In L. Heath, R. S. Tindale, J. Edwards, E. J. Posavac, F. B. Bryant, E. Henderson-King, Y. Suarez-Balcazar, & J. Myers (Eds.), Applications of heuristics and biases to social issues (pp. 73–90). Plenum Press.

  • Garb, H. N. (2010). The social psychology of clinical judgment. In J. E. Maddux & J. P. Tangney (Eds.), Social psychological foundations of clinical psychology (pp. 297–311). Guilford Press.

  • Gavanski, I., & Hui, C. (1992). Natural sample spaces and uncertain belief. Journal of Personality and Social Psychology, 63(5), 766–780. https://doi.org/10.1037/0022-3514.63.5.766

  • Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684–704. https://doi.org/10.1037/0033-295X.102.4.684

  • Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta‐analysis of effects and processes. Advances in Experimental Social Psychology, 38, 69–119. https://doi.org/10.1016/S0065-2601(06)38002-1

  • Henderlong, J., & Lepper, M. R. (2002). The effects of praise on children’s intrinsic motivation: A review and synthesis. Psychological Bulletin, 128(5), 774–795. https://doi.org/10.1037/0033-2909.128.5.774

  • Hoffrage, U., Lindsey, S., Hertwig, R., & Gigerenzer, G. (2000). Communicating statistical information. Science, 290(5500), 2261–2262. https://doi.org/10.1126/science.290.5500.2261

  • Horley, J. (2006). Personal construct psychotherapy: Fixed-role therapy with forensic clients. Journal of Sexual Aggression, 12(1), 53–61. https://doi.org/10.1080/13552600600673596

  • Jang, S., Prasad, A., & Ratchford, B. T. (2017). Consumer search of multiple information sources and its impact on consumer price satisfaction. Journal of Interactive Marketing, 40, 24–40. https://doi.org/10.1016/j.intmar.2017.06.004

  • Johnson, E. J., & Goldstein, D. G. (2003). Do defaults save lives? Science, 302(5649), 1338–1339. https://doi.org/10.1126/science.1091721

  • Kelly, G. A. (1955). The psychology of personal constructs. Vol. 1. A theory of personality. Vol. 2. Clinical diagnosis and psychotherapy. W. W. Norton.

  • Kelly, G. A. (1973). Fixed role therapy. In R. M. Jerevich (Ed.), Direct psychotherapy: 28 American originals (pp. 394–422). University of Miami Press.

  • Khansari, D. N., Murgo, A. J., & Faith, R. E. (1990). Effects of stress on the immune system. Immunology Today, 11, 170–175. https://doi.org/10.1016/0167-5699(90)90069-L

  • Kuhbandner, C. (2020). The scenario of a pandemic spread of the coronavirus SARS-CoV-2 is based on a statistical fallacy. Advance. Preprint. https://doi.org/10.31124/advance.12151962.v1

  • Lawrence, D. H., & Festinger, L. (1962). Deterrents and reinforcement: The psychology of insufficient reward. Stanford University Press.

  • Leahey, T. M., Crowther, J. H., & Mickelson, K. D. (2007). The frequency, nature, and effects of naturally occurring appearance-focused social comparisons. Behavior Therapy, 38(2), 132–143. https://doi.org/10.1016/j.beth.2006.06.004

  • Lepper, M. R., Ross, L., & Lau, R. R. (1986). Persistence of inaccurate beliefs about the self: Perseverance effects in the classroom. Journal of Personality and Social Psychology, 50(3), 482–491. https://doi.org/10.1037/0022-3514.50.3.482

  • Lewin, K. (1947). Frontiers in group dynamics: Concept, method and reality in social science; social equilibria and social change. Human Relations, 1, 5–41. https://doi.org/10.1177/001872674700100103

  • Lucy, J. A. (2016). Recent advances in the study of linguistic relativity in historical context: A critical assessment. Language Learning, 66(3), 487–515. https://doi.org/10.1111/lang.12195

  • Maddux, J. E., & Tangney, J. P. (2010). Social psychological foundations of clinical psychology. Guilford Press.

  • Maertz, C. P. Jr., & Boyar, S. L. (2012). Theory‐driven development of a comprehensive turnover‐attachment motive survey. Human Resource Management, 51(1), 71–98. https://doi.org/10.1002/hrm.20464

  • Martinez, A. G., Piff, P. K., Mendoza-Denton, R., & Hinshaw, S. P. (2011). The power of a label: Mental illness diagnoses, ascribed humanity, and social rejection. Journal of Social and Clinical Psychology, 30(1), 1–23. https://doi.org/10.1521/jscp.2011.30.1.1

  • Mata, J., Todd, P. M., & Lippke, S. (2010). When weight management lasts: Lower perceived rule complexity increases adherence. Appetite, 54(1), 37–43. https://doi.org/10.1016/j.appet.2009.09.004

  • McDaniel, M. A., & Einstein, G. O. (2007). Prospective memory: An overview and synthesis of an emerging field. Sage Publications.

  • McGrath, A. (2020). Bringing cognitive dissonance theory into the scholarship of teaching and learning: Topics and questions in need of investigation. Scholarship of Teaching and Learning in Psychology, 6(1), 84–90. https://doi.org/10.1037/stl0000168

  • Mead, G. H. (1934). Introduction. In C. M. Morris (Ed.), Mind, self, and society: From the standpoint of a social behaviorist. University of Chicago Press.

  • Metalsky, G. I., Laird, R. S., Heck, P. M., & Joiner, T. E. Jr. (1995). Attribution theory: Clinical applications. In W. T. O’Donohue & L. Krasner (Eds.), Theories of behavior therapy: Exploring behavior change (pp. 385–413). American Psychological Association. https://doi.org/10.1037/10169-014

  • Mulligan, N. W., & Lozito, J. P. (2004). Self-generation and memory. In B. H. Ross (Ed.), The psychology of learning and motivation (pp. 175–214). Elsevier Academic Press.

  • Oaten, M., Stevenson, R. J., & Case, T. I. (2009). Disgust as a disease-avoidance mechanism. Psychological Bulletin, 135(2), 303–321. https://doi.org/10.1037/a0014823

  • Pennebaker, J. W., Kiecolt-Glaser, J. K., & Glaser, R. (1988). Disclosure of traumas and immune function: Health implications for psychotherapy. Journal of Consulting and Clinical Psychology, 56(2), 239–245. https://doi.org/10.1037/0022-006X.56.2.239

  • Pennebaker, J. W. (1989). Confession, inhibition, and disease. Advances in Experimental Social Psychology, 22, 211–244. https://doi.org/10.1016/S0065-2601(08)60309-3

  • Petrie, K. J., Booth, R. J., & Pennebaker, J. W. (1998). The immunological effects of thought suppression. Journal of Personality and Social Psychology, 75(5), 1264–1272. https://doi.org/10.1037/0022-3514.75.5.1264

  • Reyna, V. F., & Brainerd, C. J. (2008). Numeracy, ratio bias, and denominator neglect in judgments of risk and probability. Learning and Individual Differences, 18(1), 89–107. https://doi.org/10.1016/j.lindif.2007.03.011

  • Rosenhan, D. L. (1973). On being sane in insane places. Science, 179(4070), 250–258. https://doi.org/10.1126/science.179.4070.250

  • Rosenthal, R. (2002). Covert communication in classrooms, clinics, courtrooms, and cubicles. American Psychologist, 57(11), 839–849. https://doi.org/10.1037/0003-066X.57.11.839

  • Rudert, S. C., Janke, S., & Greifeneder, R. (2020). The experience of ostracism over the adult life span. Developmental Psychology, 56(10), 1999–2012. https://doi.org/10.1037/dev0001096

  • Ruscio, J. (2004). Diagnoses and the behaviours they denote. The Scientific Review of Mental Health Practice, 3(1), 1–5.

  • Schaller, M., & Park, J. H. (2011). The behavioral immune system (and why it matters). Current Directions in Psychological Science, 20(2), 99–103. https://doi.org/10.1177/0963721411402596

  • Scheff, T. J. (1974). The labelling theory of mental illness. American Sociological Review, 39(4), 444–452. https://doi.org/10.2307/2094300

  • Sedikides, C., & Alicke, M. D. (2012). Self-enhancement and self-protection motives. In R. M. Ryan (Ed.), Oxford library of psychology. The Oxford handbook of human motivation (pp. 303–322). Oxford University Press.

  • Simpson, E. H. (1951). The interpretation of interaction in contingency tables. Journal of the Royal Statistical Society: Series B (Methodological), 13(2), 238–241. https://doi.org/10.1111/j.2517-6161.1951.tb00088.x

  • Smith, A. R., & Price, P. C. (2010). Sample size bias in the estimation of means. Psychonomic Bulletin & Review, 17(4), 499–503. https://doi.org/10.3758/PBR.17.4.499

  • Snyder, M. (1984). When belief creates reality. Advances in Experimental Social Psychology, 18, 247–305. https://doi.org/10.1016/S0065-2601(08)60146-X

  • Steller, M., & Koehnken, G. (1989). Criteria-based statement analysis. In D. C. Raskin (Ed.), Psychological methods in criminal investigation and evidence (pp. 217–245). Springer.

  • Storms, M. D., & Nisbett, R. E. (1970). Insomnia and the attribution process. Journal of Personality and Social Psychology, 16(2), 319–328. https://doi.org/10.1037/h0029835

  • Swets, J. A., Dawes, R. M., & Monahan, J. (2000). Psychological science can improve diagnostic decisions. Psychological Science in the Public Interest, 1(1), 1–26. https://doi.org/10.1111/1529-1006.001

  • Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self‐control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72(2), 271–322. https://doi.org/10.1111/j.0022-3506.2004.00263.x

  • Thorpe, G. L., & Olson, S. L. (1997). Behavior therapy: Concepts, procedures, and applications. Allyn & Bacon.

  • Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76(2), 105–110. https://doi.org/10.1037/h0031322

  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

  • Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101(4), 547–567. https://doi.org/10.1037/0033-295X.101.4.547

  • Vrij, A. (2005). Criteria-based content analysis: A qualitative review of the first 37 studies. Psychology, Public Policy, and Law, 11(1), 3–41. https://doi.org/10.1037/1076-8971.11.1.3

  • Watzlawick, P. (1993). The language of change: Elements of therapeutic communication. W. W. Norton.

  • Wells, G. L., & Olson, E. A. (2003). Eyewitness testimony. Annual Review of Psychology, 54, 277–295. https://doi.org/10.1146/annurev.psych.54.101601.145028

  • West, R., Michie, S., Rubin, G. J., & Amlôt, R. (2020). Applying principles of behaviour change to reduce SARS-CoV-2 transmission. Nature Human Behaviour, 4(5), 451–459. https://doi.org/10.1038/s41562-020-0887-9

  • Williams, K. D. (2007). Ostracism. Annual Review of Psychology, 58, 425–452. https://doi.org/10.1146/annurev.psych.58.110405.085641

  • Wilson, T. D., Damiani, M., & Shelton, N. (2002). Improving the academic performance of college students with brief attributional interventions. In J. Aronson (Ed.), Improving academic achievement: Impact of psychological factors on education (pp. 89–108). Academic Press.

  • Woods, T. E., Antoni, M. H., Ironson, G. H., & Kling, D. W. (1999). Religiosity is associated with affective and immune status in symptomatic HIV-infected gay men. Journal of Psychosomatic Research, 46(2), 165–176. https://doi.org/10.1177/135910539900400302