Open Access Research Article

Illusion of Control

The Role of Personal Involvement

Published Online: https://doi.org/10.1027/1618-3169/a000225

Abstract

The illusion of control consists of overestimating the influence that our behavior exerts over uncontrollable outcomes. Available evidence suggests that an important factor in the development of this illusion is the personal involvement of participants who are trying to obtain the outcome. The dominant view assumes that this is due to social motivations and self-esteem protection. We propose that it may instead be due to a bias in contingency detection that occurs when the probability of the action (i.e., of the potential cause) is high. Indeed, personal involvement may often have been confounded with the probability of acting, as participants who are more involved tend to act more frequently than those for whom the outcome is irrelevant and who therefore become mere observers. We tested these two variables separately. In two experiments, the outcome was always uncontrollable, and we used a yoked design in which the participants in one condition were actively involved in obtaining the outcome while the participants in the other condition observed the adventitious cause-effect pairs. The results support the latter account: Those acting more often to obtain the outcome developed stronger illusions, and so did their yoked counterparts.

In her seminal work on the illusion of control, Langer (1975) found that people trying to obtain a desired outcome that occurred independently of their behavior tended to believe that they were controlling it. The experiments conducted by Langer were followed by many studies with a common feature: Even though the participants’ behavior was not the actual cause of the outcomes, participants nevertheless believed that they were controlling the outcomes (e.g., Alloy & Abramson, 1979; Matute, 1995, 1996; Ono, 1987; Rudski, Lischner, & Albert, 1999; Thompson, 1999; Vyse, 1997).

A common index to measure the contingency between two events is the normative ∆p rule (Jenkins & Ward, 1965). It is computed as the difference between the probability that the outcome occurs in the presence and in the absence of the potential cause, ∆p = p(O|C) − p(O|¬C). If these two probabilities are equal, the contingency between the two events is zero and there is no causal relationship between them. The illusion of control occurs when, despite a zero contingency, people judge that their behavior controls the outcome.
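As an illustration, the following minimal sketch (ours, not part of the original materials) computes ∆p from the four trial-type counts introduced below as cells a, b, c, and d of the contingency matrix; the counts in the example are hypothetical.

def delta_p(a, b, c, d):
    """Return Delta-p = p(O|C) - p(O|not C) from the four cell counts."""
    p_o_given_c = a / (a + b)          # outcome probability when the cause is present
    p_o_given_not_c = c / (c + d)      # outcome probability when the cause is absent
    return p_o_given_c - p_o_given_not_c

# Hypothetical example: the outcome occurs 80% of the time with or without the
# cause, so the contingency is zero despite many cause-outcome coincidences.
print(delta_p(a=40, b=10, c=40, d=10))  # prints 0.0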

The traditional approach to the illusion of control has been framed in motivational terms (e.g., Koenig, Clements, & Alloy, 1992; Langer, 1975; Thompson, Armstrong, & Thomas, 1998). From this perspective, people’s judgments of control are influenced by subjective needs related to the maintenance and enhancement of self-esteem (e.g., Heider, 1958; Kelley, 1973; Weiner, 1979). One of these is the so-called need for control (e.g., Adler, 1930; Kelley, 1973; White, 1959). It has been shown that the sense of having control has benefits for well-being (e.g., Bandura, 1989; Lefcourt, 1973). The perception of uncontrollability has been related to negative consequences at the emotional, cognitive, and motivational levels (Overmier & Seligman, 1967; Seligman & Maier, 1967), and even to depression (Abramson, Seligman, & Teasdale, 1978).

Given the importance of actual and perceived control, some researchers have suggested that the illusion of control is a self-serving bias that protects people from the negative consequences of perceiving the uncontrollability of important events (e.g., Alloy & Abramson, 1979; Alloy, Abramson, & Kossman, 1985; Koenig et al., 1992). Like other self-serving biases, the illusion of control is seen as a self-esteem enhancing mechanism that allows people to take credit for successful actions and to deny responsibility for failures (Bradley, 1978; Heider, 1976). In this way, when people acting to obtain a desired outcome face a random sequence of successes and failures, they may tend to view themselves as responsible for the successes and attribute the failures to other causes such as, for example, chance (e.g., Langer & Roth, 1975). Moreover, some researchers have found a positive relationship between the degree of need for an outcome and participants’ overconfidence in their own chances of obtaining it (Biner, Angle, Park, Mellinger, & Barber, 1995).

From this perspective, overestimating the actual degree of control over an event is only important to the extent that controlling it might pose a challenge to self-esteem. Thus, people do not need to overestimate their control over events that are irrelevant to their self-esteem. The extent to which people are involved in obtaining the outcome, or the extent to which the outcome is important to them, becomes a crucial factor in this approach (see Thompson, 1999). This factor, which we will call personal involvement, depends on the potential causal role of the participant’s actions, as opposed to external causes (Alloy et al., 1985; Langer, 1975; Langer & Roth, 1975). Following this reasoning, Alloy et al. (1985) also claimed that the illusion of control should be larger in situations in which a person’s behavior is the potential cause because these situations are relevant to self-esteem; cases in which the person’s behavior is not a potential cause are irrelevant and should not produce an illusion.

Evidence for this view comes mainly from studies on the depressive realism effect (Alloy & Abramson, 1979; Alloy, Abramson, & Viscusi, 1981; Alloy et al., 1985; Msetfi, Murphy, & Simpson, 2007; Msetfi, Murphy, Simpson, & Kornbrot, 2005; Presson & Benassi, 2003). In their seminal work, Alloy and Abramson (1979) found that depressed and nondepressed people differed in their ability to detect the absence of control. Nondepressed participants showed an illusion of control when they judged the control they exerted over uncontrollable outcomes. Depressed participants showed an accurate perception of their absence of control. This has generally been interpreted as a lack of motivation of depressed participants to make use of the self-serving mechanism that leads to the illusion of control (or, vice versa, a weaker susceptibility to the illusion of control being part of the causal chain leading to depression; see Alloy & Abramson, 1979; Alloy et al., 1985).

A very different approach has emphasized the cognitive aspects of the illusion of control. Within this framework, the illusion of control is seen as a deviation from the accurate judgments of contingency (i.e., those based on ∆p; see, e.g., Allan & Jenkins, 1983) that should be expected when participants learn the relationship between their behavior and uncontrollable outcomes. Research in this field has been interested in how people make use of the information derived from cause-outcome pairings, regardless of whether the cause is the behavior of the person who judges the causal relation or an external event (e.g., Allan & Jenkins, 1983; Blanco, Matute, & Vadillo, 2013; Jenkins & Ward, 1965; Kao & Wasserman, 1993; Shanks, 2007; Wasserman, 1990). From this perspective, the illusion of control has been regarded as a special case of a more general illusion that has been called the illusion of causality (see Matute, Yarritu, & Vadillo, 2011). Therefore, the illusion of control is expected to work just like any other causal illusion in which the potential cause is an external event.

When participants act (potential cause) to obtain the outcome, their action can be successful (the outcome occurs) or not (the outcome does not occur). These two situations are represented by cells a and b of the contingency table (see Table 1), respectively. Similarly, if the participant does not act to obtain the outcome (i.e., the potential cause is absent), the outcome can occur or not. This is represented in Table 1 by cells c and d. The potential cause in this table does not need to be the participant’s behavior. Despite the many differences among the various theories of contingency judgments that attempt to explain the illusion of control and related effects (see Blanco, Matute, & Vadillo, 2011, 2012), they all agree that decades of research in this area have shown that people do not give the same weight to each cell of the contingency matrix (e.g., Kao & Wasserman, 1993). Cause-outcome coincidences (i.e., cell a events) are known to be the pieces of information that have the largest impact on contingency judgments (e.g., Anderson & Sheu, 1995; Kao & Wasserman, 1993; Matute et al., 2011; Smedslund, 1963; White, 2003). Thus, a variety of theories of contingency judgments, which are clearly different from each other (see Shanks, 2007, 2010, for comprehensive reviews of associative, inferential, and other theoretical accounts of contingency judgments), nevertheless predict that any factor that contributes to a higher number of cell a events, relative to the other cells, should promote higher judgments.

Table 1. Contingency matrix containing the four possible cause-outcome combinations

                 Outcome present    Outcome absent
Cause present    a                  b
Cause absent     c                  d

One such factor is the probability of the outcome, p(O). It is well known that when p(O) is high, people tend to overestimate the relationship between the potential cause and the outcome. This is known as the outcome-density bias and is a key factor in the development of the illusions of causality and of control (Allan & Jenkins, 1983; Alloy & Abramson, 1979; Hannah & Beneteau, 2009; Matute, 1995; Msetfi et al., 2005; Tennen & Sharp, 1983). In addition to the outcome-density bias there is the cue-density bias, which refers to an overestimation of contingency when the probability of the potential cause, p(C), is high (Allan & Jenkins, 1983; Hannah & Beneteau, 2009; Vadillo, Musca, Blanco, & Matute, 2011). While the outcome-density bias has been widely studied, both in situations in which participants are personally involved (i.e., the participants’ behavior is the potential cause; see, e.g., Matute, 1995) and in situations in which they are not (i.e., an external event is the potential cause; see, e.g., Allan, Siegel, & Tangen, 2005), the effect of the probability of the cause (i.e., the action) on the illusion of control has received less attention. However, there is evidence supporting the idea that the more often participants act, the greater their contingency judgments will be (e.g., Blanco, Matute, & Vadillo, 2009; Blanco et al., 2011; Matute, 1996).

It follows from these analyses that even when the outcome is uncontrollable, if p(O) is high, a person who acts frequently to obtain the outcome will experience a high number of cause-outcome coincidences and will almost certainly develop an illusion of control (Blanco et al., 2011; Matute, 1996). Importantly, participants who are personally involved in trying to obtain an outcome tend to act more frequently than those for whom the outcome is irrelevant, who often become mere observers (at best). Thus, these two variables, personal involvement and action probability, may often have been confounded. We therefore propose that if these two variables are tested separately from each other, it might turn out that it is not personal involvement per se, but the probability of action, that produces the illusion.
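To make this reasoning concrete, the following simulation sketch (ours, with arbitrary parameter values) shows that when an uncontrollable outcome occurs with p(O) = .80, acting on most trials multiplies the number of cell a coincidences even though the programmed contingency remains null.

import random

def simulate(p_cause, p_outcome=0.80, n_trials=100, seed=0):
    """Count cell a events and compute Delta-p under a null contingency."""
    random.seed(seed)
    a = b = c = d = 0
    for _ in range(n_trials):
        cause = random.random() < p_cause      # the person acts / the cue is present
        outcome = random.random() < p_outcome  # the outcome is independent of the cause
        if cause and outcome:
            a += 1
        elif cause:
            b += 1
        elif outcome:
            c += 1
        else:
            d += 1
    return a, a / (a + b) - c / (c + d)

for p_c in (0.20, 0.80):
    coincidences, dp = simulate(p_c)
    print(f"p(C) = {p_c:.2f}: cell a events = {coincidences}, Delta-p = {dp:+.2f}")
# Both runs have a near-zero contingency, but the high-p(C) run accumulates
# roughly four times as many coincidences, the trial type that is weighted
# most heavily in contingency judgments.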

Importantly, there is evidence suggesting that being the one who performs the action is not even necessary. The effect of p(C) has been demonstrated in situations in which the potential cause is an external event (e.g., Kutzner, Freytag, Vogel, & Fiedler, 2008; Matute et al., 2011; Perales, Catena, Shanks, & González, 2005; Vadillo et al., 2011). For instance, in a recent experiment by Matute et al. (2011), the contingency between a potential cause (a fictitious medicine administered by a fictitious agent) and an outcome (recovery from illness) was zero, but p(O) was .80. For one group p(C) was .80; for the other it was .20. Both groups showed an illusion of causality, but the former group gave significantly higher judgments than the latter. Thus, being personally involved is not necessary to develop the illusion, nor is it necessary to act oneself. Instead, the high frequency with which the potential cause occurs (assuming that the desired outcome is also frequent, and regardless of who the agent is) predicts when the illusion will occur. Nevertheless, it should be noted that those experiments used scenarios in which all participants were observers and did not compare them to conditions in which participants were acting to obtain the outcome and a true illusion of control could develop. The present research aimed to provide such a comparison.

To our knowledge, one of the very few studies that empirically compared the illusion of control under conditions in which the potential cause was the participant’s behavior or an external cause was that of Alloy et al. (1985). Their conclusions were opposite to our expectations. They reported that personal involvement, and not p(C), was the necessary factor in the development of the illusion. However, there are several methodological issues in their study that could explain those results. What Alloy et al. (1985) found was that the illusion of control appeared when participants were asked about the causal relationship between their behavior and an outcome, and not when they were asked about the predictive relationship between external events. This result need not mean that personal involvement is necessary for the illusion of control to occur. Alternatively, it could be due to the fact that different questions (i.e., causal vs. predictive questions) give rise to different judgments (Matute, Vegas, & De Marez, 2002; Vadillo & Matute, 2007; White, 2003). In addition, the difference observed by Alloy et al. could be due to their using causes in one group and predictors in the other, as causes and predictors have also been shown to produce different judgments (Pineño, Denniston, Beckers, Matute, & Miller, 2005). Moreover, Alloy et al. did not report the participants’ number of actions. In their studies, the number and sequence of action and no-action trials produced by the participants who were involved in getting the outcome could have been very different from the number of cue events presented to participants who were observers, and this difference might also explain their differential judgments. The fact that this variable was not reported suggests that it was not considered relevant and might have been confounded. Ideally, in order to compare the judgments in one case and the other, the cue (whether the participant’s behavior or an external event) should occur with the same frequency and distribution in both cases. Moreover, similar cause and effect events and similar assessment questions should be used in both cases. The present research aimed to provide a fairer comparison between conditions in which the potential cause is the participants’ behavior and conditions in which the cause is an external event.

Experiment 1

We used a yoked design. Participants were shown the records of fictitious patients who suffered from a fictitious disease. Each participant in Group Active was free to administer a fictitious medicine to their patients. Each participant in Group Yoked observed the sequence of actions performed by their counterpart in Group Active, as well as the consequences of those actions. Therefore, the probability and sequence with which the cause occurred were defined by Group Active. For participants in Group Active the potential cause of the outcome was their own behavior; for those in Group Yoked it was an external event.

The yoked design allows us to test the effect of two variables that have often been confounded: personal involvement (Group Active vs. Group Yoked) and p(C). It is when these two variables are disentangled that the predictions of the motivational and the cognitive approaches become clearly different. According to the motivational approach, if the two variables are separated from each other, only personal involvement should affect the judgments of contingency. By contrast, according to the cognitive account it is p(C) that should affect the participants’ judgments, regardless of whether they are actors or observers.

Method

Participants and Apparatus

Ninety-two anonymous volunteers participated in the experiment in exchange for a cafeteria voucher. The sequence of cause-outcome pairings presented to each participant in Group Yoked was derived from the performance of the corresponding active participant. Thus, it was necessary to program the computer differently for each yoked participant. For this reason, the first 10 participants were assigned to Group Active. The remaining participants were then randomly assigned to one of the two conditions as they arrived at the laboratory, resulting in a total of 46 participants in Group Active and 46 in Group Yoked. The experiment was run on personal computers located in individual booths.

Procedure and Design

The task was an adaptation of the allergy task, which has been widely used in contingency judgment research. This task has proven to be sensitive to the illusion of causality both when the potential cause is an external event (e.g., Matute et al., 2011) and when it is the participant’s behavior (Blanco et al., 2011). As in Blanco et al.’s study, we modified the standard procedure so that it would allow the participants’ actions to serve as potential causes. Participants were prompted to imagine being a medical doctor specialized in a rare disease called “Lindsay Syndrome”. They were told about a new medicine (Batatrim) that could cure the crises caused by the disease. Their mission was to find out whether this medicine was effective. There were 100 learning trials (i.e., 100 fictitious patients) before the test phase. On each trial, participants in Group Active were free to act (to administer the medicine to a fictitious patient) and observe the effects. Participants in Group Yoked saw, on each trial, whether the patient was given the medicine (cause) as well as whether the patient recovered (outcome). The probability of the cause for each pair of Active-Yoked participants was thus defined by the number of trials in which the active participant decided to administer the medicine, divided by the total number of trials. The sequence of trials in which the cause (i.e., Batatrim) was present or absent for the participants in Group Yoked was likewise defined by the sequence of trials in which their counterpart active participant decided to administer the medicine. Neither the active nor the yoked participants were aware of this feature of the design. The occurrence of the outcome (recovery from the crises) was independent of the participants’ behavior and followed a predefined pseudorandom sequence, identical for both groups. Therefore, the resulting sequence of cue-outcome pairings was identical for each Active-Yoked pair of participants. The probability of the outcome was high (.80) because, as described above, this is known to lead to a stronger illusion of control.
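For readers who prefer a procedural summary, the following sketch (ours; the action policy shown is purely hypothetical) reproduces the trial logic just described: the outcome sequence is fixed in advance with p(O) = .80, the active participant’s choices define p(C), and the yoked participant simply observes the identical cause-outcome pairs.

import random

random.seed(1)
N_TRIALS = 100
# Predefined pseudorandom outcome sequence, identical for both members of a pair
# and independent of anything the active participant does.
outcome_sequence = [random.random() < 0.80 for _ in range(N_TRIALS)]

def run_active(decide):
    """`decide(t)` stands in for the active participant's choice on trial t."""
    trials = []
    for t in range(N_TRIALS):
        gave_medicine = decide(t)        # potential cause: the participant's own action
        recovered = outcome_sequence[t]  # outcome: uncontrollable
        trials.append((gave_medicine, recovered))
    return trials

active_trials = run_active(lambda t: random.random() < 0.6)  # hypothetical action policy
yoked_trials = list(active_trials)  # the yoked participant observes the identical pairs
p_cause = sum(gave for gave, _ in active_trials) / N_TRIALS
print(f"p(C) for this Active-Yoked pair: {p_cause:.2f}")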

After completing all 100 training trials, participants were presented with the following question: “To what extent do you think that Batatrim was effective in healing the crises of the patients you have seen?” In illusion of control experiments, participants are usually asked about the extent to which they believe that their behavior was effective in controlling the outcome. Because the potential cause in our experiment was an external event for half of the participants, we replaced the standard controllability wording with the more general “effectiveness” phrasing. This allowed us to present the same question to all participants. The answers were given by clicking on a 0–100 scale, anchored at 0 (definitely NOT) and 100 (definitely YES).

Results and Discussion

The mean p(C) was computed from the actions of the active participants, so its value was the same for the Active and Yoked Groups. The mean and the standard error of the mean were 0.59 and 0.03, respectively. We conducted a multiple regression analysis including personal involvement, p(C), the interaction between these two factors, and the actually experienced1 contingency as predictors of the judgments. The backward elimination method was used for the regression analysis. This method tests a series of regression models, excluding from each new model the worst predictor of the previously tested model according to a statistical criterion (p ≥ .10). Following this strategy reduces the risk of failing to detect a relationship that actually exists (see Menard, 1995). The results of this analysis can be seen in Table 2. According to this method, actually experienced contingency, personal involvement, and the Personal Involvement × p(C) interaction were excluded, in that order, as predictors of the participants’ judgments. The final and most parsimonious model contained only p(C).
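As an illustration of the procedure (not the authors’ analysis script; the data below are simulated and the variable names are ours), a backward elimination regression can be sketched as follows:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 92
data = pd.DataFrame({
    "involvement": rng.integers(0, 2, n),      # 0 = yoked, 1 = active
    "p_cause": rng.uniform(0.2, 0.9, n),       # proportion of trials with the cause present
    "contingency": rng.normal(0, 0.05, n),     # actually experienced Delta-p (near zero)
})
data["interaction"] = data["involvement"] * data["p_cause"]
# Simulated judgments driven only by p(C), mirroring the pattern reported above.
judgments = 40 + 40 * data["p_cause"] + rng.normal(0, 10, n)

predictors = ["involvement", "p_cause", "contingency", "interaction"]
while predictors:
    model = sm.OLS(judgments, sm.add_constant(data[predictors])).fit()
    p_values = model.pvalues.drop("const")
    worst = p_values.idxmax()
    if p_values[worst] < 0.10:   # every remaining predictor meets the criterion
        break
    predictors.remove(worst)     # drop the weakest predictor and refit
print("Retained predictors:", predictors)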

Table 2. Results of backward elimination regression analysis

In order to further assess the influence of p(C), the sample was then classified as a function of the number of actions performed by the participants in the Active Group, that is, their p(C). We selected participants at or below the 33.33rd percentile of this variable (Low p(C), a probability of .50 or lower) and participants at or above the 66.66th percentile (High p(C), a probability of .68 or higher). The mean judgments for each p(C) condition in each of the two personal involvement conditions can be seen in Figure 1. A 2 (Probability of the Cause: High vs. Low) × 2 (Personal Involvement: Active vs. Yoked) analysis of variance (ANOVA) showed a main effect of p(C), F(1, 60) = 14.08, p < .001, ηp2 = .19. All other effects were nonsignificant, largest F(1, 62) = 1.91, ηp2 = .03. Thus, as expected from the cognitive account, it was the frequency with which the cause occurred (be it the participant’s behavior or an external event) that favored a higher or lower illusion, and not the fact that some participants acted and others observed.

Figure 1. Mean judgments given by participants of Experiment 1 in the Active and Yoked groups as a function of p(C), high or low. Error bars denote the standard error of the mean.

Although these results clearly suggest that the key factor in the development of the illusion of control is p(C), there are reasons why a definitive claim in favor of the cognitive hypothesis must be made with caution. First, it could be argued that the active participants in this experiment were not really engaged in the task. Given that participants were prompted to detect the relationship between the medicine and recovery from the crises, rather than to obtain the outcome (i.e., recovery), it is possible that their motivation to control the outcome was low. Second, it could be argued that the personal involvement and probability-of-the-cause factors did not have the same opportunity to affect participants’ judgments. In this experiment p(C) was a continuous variable derived from the action rate of Group Active, whereas personal involvement was a dichotomous variable that resulted from the experimental manipulation. The greater number of levels of the cognitive variable, p(C), compared with the involvement variable, could have favored the observation of a significant correlation between judgments and p(C). The next experiment addresses these concerns.

Experiment 2

We introduced two main modifications in Experiment 2 with respect to the involvement factor and one with respect to the cognitive factor. First, we tried to better motivate active participants by changing the overall goal of the task. In this experiment we explicitly informed all participants that the main goal of the task was to obtain as many outcomes as possible, that is, to heal as many patients as possible. Second, we manipulated personal involvement using the actor-observer procedure commonly used in the self-serving bias literature. To do so, we used an on-line yoked procedure in which, while the active participant was performing the experiment on his or her computer, the yoked participant observed everything (i.e., both the decisions of the active participant and their outcomes) on a cloned screen. In this case both active and yoked participants were aware of this feature. That is, the relevance for self-esteem of the active participant in this experiment came not only from their agent role and their motivation to obtain more outcomes but also from being observed.

With respect to the cognitive factor, in Experiment 1 we did not manipulate the probability of acting; we simply measured it. Thus, in order to further clarify the effect of this factor, in Experiment 2 we manipulated the probability with which active participants acted (i.e., administered the medicine). This manipulation featured two levels, high and low, thereby also ensuring that both the personal involvement factor and the cognitive factor (probability of the cause) had the same opportunity to affect participants’ judgments.

As in the previous experiment, the predictions of the two approaches to the illusion of control are also clearly different from each other in Experiment 2. From the motivational approach, it is expected that the illusion of control will be larger when participants judge the effects of their own behavior (active participants) than when they judge the effects of the behavior of others (yoked participants). From the cognitive approach, there is no reason to expect that differences should emerge as a function of whether the potential cause is the participants’ behavior or somebody else’s behavior. From this perspective, only p(C) is expected to influence the judgments.

Method

Participants and Apparatus

One hundred anonymous volunteers were paid €5 for their participation. They were run in pairs in individual booths. For each pair of participants, one of them was randomly assigned to the active cubicle (clearly labeled “Participant A” on the wall above the screen, and including a mouse in addition to the computer screen). The other one was assigned to the yoked cubicle (labeled “Participant B” and containing only a screen). The two screens were connected to the same computer so that they showed identical information at all times.

Procedure and Design

This experiment used an adaptation of the task used in Experiment 1. To manipulate personal involvement and the probability of the cause in a more comparable manner, the experiment used a 2 × 2 factorial design. Participants in the two involvement conditions were exposed to exactly the same contingency information and were both told that the goal of the active participants was to heal as many (fictitious) patients as possible. The instructions that they received were also identical, with the following paragraph stating what each of them should do: “If you are participant ‘A’ you will have to decide whether or not to administer Batatrim to each patient. If you are participant ‘B’ you will have to observe those decisions and their consequences.”

The probability of the cause was also manipulated at two levels. Participants in the High condition had a maximum of seven doses of Batatrim for every 10 patients (trials). Participants in the Low condition had a maximum of three doses for every 10 patients. Participants were told that after every 10 patients they would get a new supply of seven (or three) doses. They were also requested to use them all. Thus, some participants were asked to respond on 30% of the trials (Low p(C) Group) while others were requested to respond on 70% of the trials (High p(C) Group).
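A minimal sketch of this constraint (ours, for illustration only; the within-block placement of doses is arbitrary) shows how the planned action probabilities of .70 and .30 follow from the dose supply:

import random

def block_actions(doses_per_block, n_blocks=10, seed=2):
    """Place the available doses at random positions within each block of 10 trials."""
    random.seed(seed)
    actions = []
    for _ in range(n_blocks):
        block = [True] * doses_per_block + [False] * (10 - doses_per_block)
        random.shuffle(block)
        actions.extend(block)
    return actions

high = block_actions(7)  # High p(C) condition: 7 doses per 10 patients, p(C) = .70
low = block_actions(3)   # Low p(C) condition: 3 doses per 10 patients, p(C) = .30
print(sum(high) / len(high), sum(low) / len(low))  # 0.7 0.3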

As in the previous experiment, the probability of the outcome (recovery) was high (.80) regardless of whether or not the cause was present. The outcome was presented in a predefined pseudorandom sequence. Once the training phase was finished, participants gave their effectiveness judgment. The test question was the same as in Experiment 1 but was administered using paper and pencil because each pair of participants shared the same computer. Once participants wrote down their judgment, they received a second sheet of paper with the following question, aimed at assessing whether the involvement manipulation had been effective: “To what extent did you feel involved in the healing of the patients?” The answers to both questions were given using a 0–100 scale, anchored at 0 (definitely NOT) and 100 (definitely YES).

Results and Discussion

As the active participants were free to administer Batatrim on each trial (within the limit on the number of doses imposed by the experimental manipulation), that is, some of them could choose not to act, we first needed to ensure that their action rates matched those planned for each condition. To do so, we imposed a selection criterion on the action rate that each active participant had to satisfy for his or her data (and those of the corresponding yoked participant) to be included in the analyses. The criterion was that active participants had to perform at least 90% of all possible actions. In the Low p(C) Group the limit was 30 doses and participants were asked to use them all; therefore, if an active participant in this condition administered the medicine in fewer than 27 trials (90% of 30), the data of that pair of participants were removed from subsequent analyses. For the High p(C) condition, the criterion was that the active participant administered the medicine in 63 trials or more (90% of 70). These criteria were satisfied by 39 of the 50 pairs of participants (a total of 78 participants). Of these 78 participants, 40 (20 active and 20 yoked) were in the Low p(C) condition and 38 (19 active and 19 yoked) were in the High p(C) condition.2

We next conducted an analysis of the answers to the question that we added at the end of the experiment to check whether the involvement manipulation had been effective. Means (and standard errors of the means) in this question for the active and yoked participants were 67.38 (4.02) and 46.36 (4.21), respectively. A 2 (Probability of the Cause) × 2 (Personal Involvement) ANOVA found that, as expected, the degree of personal involvement which the participants felt toward the task was higher for the active participants than for the yoked participants, F(1, 74) = 10.75, p < .005, ηp2 = .13. Also as expected, the main effect of p(C) and the interaction were nonsignificant, largest F(1, 74) = 0.94, ηp2 = .01. That is, the involvement manipulation worked as planned.

The critical results are the mean judgments of effectiveness for each condition. These are shown in Figure 2. The figure suggests that judgments did not differ between active and yoked participants, and that judgments were higher in the High than in the Low p(C) condition. A 2 (Probability of the Cause) × 2 (Personal Involvement) ANOVA confirmed these impressions. As expected, a significant main effect of p(C) was found, F(1, 74) = 16.41, p < .001, ηp2 = .18, and neither a main effect of personal involvement nor an interaction was observed, largest F(1, 74) = 0.47, ηp2 = .01. Therefore, and consistent with our hypothesis, participants’ judgments of contingency were affected by p(C) and not by personal involvement.

Figure 2. Mean judgments given by Active and Yoked groups of Experiment 2 in each p(C) group, high or low. Error bars denote the standard error of the mean.

The results of this experiment are congruent with those of Experiment 1. Moreover, in this case it is difficult to question the validity of the personal involvement manipulation. As shown by the manipulation check, the experimental manipulation affected the extent to which participants felt motivated toward the task. Importantly, this difference between active and yoked participants did not affect their judgments, which were only affected by p(C). This finding leads us to suspect that previous results that have been attributed to personal involvement may not always be due to a direct effect of motivational factors on contingency estimation. Instead, the present results suggest that the apparent effect of personal involvement on judgments might be due to the higher probability of action of participants who are more personally involved.

General Discussion

The results of the two experiments presented here provide little support for the motivational approach. From this approach it is argued that people must be personally involved in trying to obtain the outcome, with their self-esteem at risk, for the illusion to occur (Alloy et al., 1985; Thompson, 1999; Thompson et al., 1998). This claim rests on the idea that the illusion of control is a self-serving bias that is activated when the relationship being judged is relevant to self-esteem (e.g., Alloy & Abramson, 1979; Dudley, 1999; Koenig et al., 1992). However, we did not find an effect of personal involvement when it was tested independently of p(C). Participants in the Yoked Group showed the illusion of control even though their judgments were not relevant to protecting their self-esteem. Moreover, we found a strong effect of p(C). As we noted earlier, this p(C) effect could explain the results that have often been attributed to personal involvement in previous research, given that participants who are more involved tend to perform more actions to obtain the outcome.

Alloy et al. (1985) had previously reported an investigation in which, as in the present one, personal involvement and p(C) were separated from each other. They reported that participants who judged the predictive value of an external event did not show a significant overestimation of contingency, whereas participants who judged the capacity of their own behavior to control the outcome did. Alloy et al. concluded that people overestimate contingency only when they are judging their own behavior because only this is relevant for self-protection. The present results do not support their conclusions. Instead, the differences observed by Alloy and her colleagues could be due, as mentioned in the Introduction, to the different assessment questions that they used in each case (Matute et al., 2002; Vadillo & Matute, 2007; White, 2003), or to the fact that they used causes in one group and predictors in the other (see Pineño et al., 2005, for differences between them). In addition, Alloy et al. did not report the number of attempts (i.e., actions) performed by participants in the active condition, nor the value of p(C) presented to passive participants. The influence of this factor has proven to be significant in the present research, whereas that of personal involvement has not. As our results show, when p(C) was high, the illusion was high as well. This is in line with previous studies in which the influence of p(C) was tested. Indeed, this p(C) effect is often described more generally as the probability-of-the-cue effect, or the cue-density effect, as it occurs with either causes or predictors as cue events (see, e.g., Blanco et al., 2011, 2013; Hannah & Beneteau, 2009; Matute, 1996; Matute et al., 2011; Perales et al., 2005; Vadillo et al., 2011).

As noted in the Introduction, another factor that is known to favor the illusion of control is p(O). Thus, we used a situation in which this probability was always high. Given that p(O) is high in cases in which the illusion occurs, the effect of p(C) appears to be due to the fact that a high p(C) makes it very likely that the cause and the outcome coincide in many trials (see Blanco et al., 2011, 2013). Moreover, it is well known that these cause-effect coincidences tend to have more weight on the perception of causal relations than trials in which only the cause or the outcome occurs (e.g., Kao & Wasserman, 1993). As noted in the Introduction, this result is predicted by many different theories of contingency judgments (see Blanco et al. 2011, 2012).

The main contribution of the present experiments is that the effects of personal involvement and probability of the cause are tested independently of each other. Even though the predictions of the motivational and the cognitive approaches can often be identical (because increased motivation produces more active behavior), when these two variables are tested separately, the predictions of the two approaches become clearly different. The motivational approach predicts, for these cases, that only those who act to obtain the outcome should develop the illusion. The cognitive approach predicts that only p(C) should influence the illusion. In our experiments, the judgments of participants who were involved in obtaining the outcome can be directly contrasted to the judgments of those who simply observed the identical events. Under these conditions, the results showed that the probability of the potential cause was the only variable that clearly influenced the participants’ judgments.

Although our results suggest that personal involvement has no influence on the illusion of control, we must acknowledge that our conclusions are based on the absence of significant differences with respect to this variable. It is possible that our participants were not sufficiently engaged in the task, so that their performance was actually irrelevant to self-esteem. Nevertheless, in the absence of more convincing evidence about the role of personal involvement in the illusion of control, it seems more parsimonious to assume that a single process (biased contingency detection due to a high probability of the cause) is responsible for the illusions previously attributed to personal involvement (Alloy et al., 1985). Indeed, Matute, Vadillo, Blanco, and Musca (2007) have shown that even an artificial learning system using a very simple and popular learning algorithm such as the Rescorla and Wagner (1972) model will develop these illusions when the outcome occurs frequently and the system acts frequently. On the other hand, although the influence of self-protection cannot be ruled out in all cases in which people develop illusions of control, our results show that this influence is not necessary to account for all instances of the illusion of control reported in the literature. In any situation in which personal involvement may translate into more active behavior, psychologists need to be aware that the increase in p(C), rather than a need to protect self-esteem, may be producing the illusion.
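To illustrate this point, the following sketch (ours; the learning-rate parameters are assumptions and the simulation is not taken from Matute et al., 2007) implements a simple Rescorla-Wagner learner with a context cue and an action cue, and shows that with a frequent but uncontrollable outcome the action acquires more associative strength when it is performed frequently:

import random

def rescorla_wagner_run(p_act, p_outcome=0.80, n_trials=100,
                        alpha_action=0.3, alpha_context=0.1, beta=0.8, lam=1.0):
    """One simulated participant: return the action's associative strength after training."""
    v_action = v_context = 0.0
    for _ in range(n_trials):
        acted = random.random() < p_act
        outcome = random.random() < p_outcome        # uncontrollable outcome
        prediction = v_context + (v_action if acted else 0.0)
        error = (lam if outcome else 0.0) - prediction
        v_context += alpha_context * beta * error    # the context is updated on every trial
        if acted:
            v_action += alpha_action * beta * error  # the action is updated only when performed
    return v_action

random.seed(4)
for p in (0.20, 0.80):
    mean_v = sum(rescorla_wagner_run(p) for _ in range(1000)) / 1000
    print(f"p(acting) = {p:.2f}: mean associative strength of the action = {mean_v:.2f}")
# Pre-asymptotically, the frequently performed (but ineffective) action retains
# more associative strength, mirroring the p(C) effect on the illusion.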

In closing, it is important to note that even though the motivational approach is normally presented as an explanation of the illusion of control, it does not really provide such an explanation. That is, it predicts that the illusion will be stronger when people are more personally involved, but it does not attempt to explain how the illusion takes place (see Matute & Vadillo, 2012, for discussion). Once this is acknowledged, our proposal becomes perfectly compatible with the motivational framework. The p(C) account we have advanced aims to provide just such an underlying mechanism.

1Because participants in the Active Group are free to act in each trial and the occurrence of the outcome event is predefined in a pseudo-random sequence, there is some degree of variance in the contingency to which participants are actually exposed, but previous research has shown that this variance does not influence participants’ judgments (Blanco, Matute, & Vadillo, 2011). Nevertheless, and despite this variance being identical for both groups in the present research, we preferred to include this variable in the regression analysis.

2We also conducted an alternative analysis with the complete sample, including those participants who did not comply with the data selection criterion. The results of this alternative analysis do not differ from the analysis presented here.


Support for this research was provided by Grant No. PSI2011-26965 from Dirección General de Investigación of the Spanish Government and Grant No. IT363-10 from the Basque Government. Ion Yarritu was supported by fellowship BES-2008-009097 from the Spanish Government. We would like to thank Fernando Blanco, Pablo Garaizar, Cristina Orgaz, Nerea Ortega-Castro, and Sara Steegen for illuminating discussions.

Helena Matute, Departamento de Fundamentos y Métodos de la Psicología, Universidad de Deusto, Apartado 1, 48080 Bilbao, Spain