Pre-Registered Report

“Drive the Lane; Together, Hard!”

An Examination of the Effects of Supportive Coplaying and Task Difficulty on Prosocial Behavior

Published Online: https://doi.org/10.1027/1864-1105/a000209

Abstract

As an entertainment technology, video games are a popular social activity that allows multiple players to cooperatively engage with on-screen challenges. Emerging research has found that when people play together, the resulting teamwork can benefit their prosocial orientations after gameplay, especially when the players cooperate with one another. The present study sought to expand the scope of these beneficial interpersonal effects by considering both inter- and intrapersonal factors. In an experimental study (N = 115), we manipulated the difficulty of a game (easy or hard) and the behavior of a confederate teammate (supportive or unsupportive playing style). We found that neither coplayer supportiveness nor game difficulty had an effect on expectations of a teammate’s prosocial behavior or on one’s own prosocial behavior toward the teammate after the game (operationalized as willingness to share small amounts of money with one’s teammate after playing). Increased expectations of prosocial behavior from one’s teammate were related to one’s own prosocial behaviors, independent of our manipulations. Considering these results, we propose alternative theoretical approaches to understanding complex social interactions in video games. Furthermore, we suggest exploring other types of manipulations of game difficulty and cooperation between video game players, as well as alternative measures of prosocial behavior.

Although often studied as if played in isolation, video games can be a surprisingly social entertainment technology. The earliest games, such as SpaceWar! (1962), required two players, and most home consoles from the Atari 2600 (1977) to the Microsoft XBox One (2013) have at least two controller ports. Advances in computing technology, such as the spread of high-speed and wireless Internet access, have further increased the opportunities for playing with others, whether co-located (playing on the same device or in front of the same screen) or not. Research has found that the prosocial effects of cooperative video game play are determined by in-game behaviors that provide or withhold expected helpful behaviors from teammates (Velez, 2015). However, little is known about how different modes of social video game play can change expectations of teammates’ in-game behaviors and how this may influence subsequent expectations of reciprocity and prosocial behaviors. The current study extends previous research examining the benefits of supportive versus unsupportive teammates from the perspective of bounded generalized reciprocity (Velez, 2015). Additionally, game difficulty is manipulated to examine whether a greater need for helpful in-game behaviors from teammates (i.e., hard difficulty settings) or a lower need (i.e., easy difficulty settings) influences how teammates who satisfy or deny such expectations affect subsequent prosocial behaviors.

Interpersonal Processes and Video Game Play

A growing body of research suggests that video game play, when engaged in as a shared social activity, is more about the act of playing together than about the content being played (e.g., Elson & Breuer, 2013), which requires a shift of theoretical frameworks to examine its interpersonal effects. One such framework is the theory of bounded generalized reciprocity (BGR; Yamagishi, Jin, & Kiyonari, 1999), proposed to explain people’s prosocial behaviors in the most basic inter- and intragroup interactions: minimal groups (i.e., groups formed by arbitrarily assigning strangers to group membership). BGR suggests that people rely on a set of instinctual expectations of others’ prosocial behaviors (i.e., the group heuristic) in order to maximize personal gain. People expect ingroup members to reciprocate prosocial behaviors, which makes them safe targets for prosocial interactions, whereas providing prosocial behaviors to outgroup members is considered risky because of the low expected chance of reciprocation, even if the individuals have never previously interacted (Yamagishi et al., 1999).

Prosocial behaviors are proposed to be generalized to all ingroup members, such that prosocial behaviors are expected to be provided and reciprocated whether or not two ingroup members have previously interacted. Ingroup members who adhere to these expectations are rewarded, whereas those who disregard them are punished by being excluded from further benefits (Yamagishi et al., 1999). Research suggests that teammates’ behaviors during cooperative video game play are similarly rewarded or punished depending on whether teammates adhere to or disregard expectations according to the group heuristic. For example, teammates who confirmed expectations by providing and reciprocating helpful behaviors during video game play received the most prosocial behaviors, whereas teammates who defied expectations and provided no helpful behaviors received the fewest prosocial behaviors, even fewer than minimal-group teammates who had not played a video game together (Velez, 2015). Additionally, the same study found that participants’ prosocial behaviors toward teammates after video game play were mediated by expectations that teammates would provide subsequent prosocial behaviors of their own, as suggested by BGR. That is, supportive teammates signaled their participation in the group heuristic and thus allowed participants to fulfill expectations of their own behaviors (e.g., to behave prosocially toward a teammate) with the assurance of collecting prosocial behaviors from their teammate. The current study aims to conceptually replicate and extend this previous research, which leads to our first set of hypotheses:

Hypothesis 1 (H1):

People who play with a supportive teammate will be more likely to expect that teammate to provide subsequent prosocial behaviors than those who play with an unsupportive teammate.

Hypothesis 2 (H2):

People who play with a supportive teammate will behave more prosocially toward that teammate than those who play with an unsupportive teammate.

Task Difficulty in Social Video Game Play

The research discussed in the previous section suggests that helpful teammates confirm reciprocity expectations and unhelpful teammates defy them, but researchers also need to take into account other dimensions of social video game play that may increase or decrease what is expected of teammates during (and after) game play. For instance, the challenges faced by teams likely determine how much is expected of teammates, such that easy challenges result in fewer expectations of teammates, while hard challenges increase expectations due to the elevated need for teamwork. In addition, previous research has shown that success in a game is an important factor that also influences subsequent social interactions. For example, a study by Breuer, Scharkow, and Quandt (2015) found that losing in a competitive game increases negative emotions which, in turn, increase the tendency to (re-)act aggressively toward the opponent in subsequent interactions. Following this line of reasoning, one could expect more prosocial behavior to be shown after playing an easy game. Other research suggests that increases in difficulty can monopolize players’ attention and redirect their focus from social influences to the effort necessary to meet the increased challenge (Bowman, Weber, Tamborini, & Sherry, 2013). However, it is plausible that cooperatively taking on a difficult challenge might be a more powerful bonding activity than tackling an easy one. As both theoretical reasoning and the findings from previous studies suggest that the effect of game difficulty on prosocial behavior and on expectations about the prosocial behavior of a teammate could be positive or negative, we chose to formulate a set of competing hypotheses:

Hypothesis 3a (H3a):

People who play a more difficult game will be more likely to expect their teammate to provide subsequent prosocial behaviors than those who play an easy game.

Hypothesis 3b (H3b):

People who play a more difficult game will be less likely to expect their teammate to provide subsequent prosocial behaviors than those who play an easy game.

Hypothesis 4a (H4a):

People who play a more difficult game will behave more prosocially toward their teammate than those who play an easy game.

Hypothesis 4b (H4b):

People who play a more difficult game will behave less prosocially toward their teammate than those who play an easy game.

Given our expectations of independent main effects of cooperation and game difficulty on prosocial behaviors (both in terms of the player’s own behavior and the player’s expectations of their teammate), it seems appropriate to consider the potential interaction of these main effects. In the case of our study, it makes sense that manipulations of teammate supportiveness might alter perceptions of the game’s difficulty, and, conversely, it is sensible to assume that a more difficult game might affect perceptions of a teammate’s helpfulness, because frustration arising from the increased game challenge might be misattributed (either explicitly or implicitly) to one’s teammate. Thus, we pose a research question regarding the (potential) interaction between supportiveness and difficulty and leave it open for exploration.

Research Question 1 (RQ1):

How will the effects of teammate supportiveness and game difficulty on expectations and prosocial behavior interact?

Finally, as the core of BGR is the expectation of reciprocity, we further assumed that expectations about the behavior of the teammate would affect one’s own prosocial behavior:

Hypothesis 5 (H5):

The expectation of prosocial behaviors from a teammate will predict prosocial behavior toward the teammate.

Method

We employed a 2 (teammate style: supportive vs. unsupportive) × 2 (game difficulty: hard vs. easy) between-subjects factorial design to examine the impact of both independent variables (and their potential interaction) on players’ subsequent expectations of reciprocity and prosocial behaviors toward teammates.

Participants

We conducted an a priori power analysis with G*Power (version 3.1.9; Faul, Erdfelder, Lang, & Buchner, 2007) to determine an optimal sample size. We consulted previous work in this area to choose a realistic effect size. As there were no previous studies that investigated the effect of game difficulty on prosocial behavior, we had to restrict our a priori power analysis to the player interaction variable (i.e., supportiveness). Using an effect size estimate of f = 0.25 (Cohen’s d = 0.5), we arrived at a suggested sample size of N = 128 for our 2 × 2 ANCOVA with one covariate to test Hypotheses 2, 4a, and 4b. As we expected that roughly 15% of the participants would have to be removed based on our exclusion criteria (see data analysis section), our targeted gross sample size was N = 148.1
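For readers without access to G*Power, the calculation can be approximated in Python with statsmodels. This is a minimal sketch under the assumption that a two-group comparison with d = 0.5, α = .05, and power = .80 mirrors the main-effect calculation reported above (the ANCOVA covariate is ignored here); the variable names are ours and not part of the study materials:

# A minimal sketch approximating the reported a priori power analysis
# (assumption: a two-group t test with d = 0.5, alpha = .05, power = .80
# mirrors the G*Power ANCOVA main-effect calculation).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # Cohen's d
                                           alpha=0.05,
                                           power=0.80,
                                           alternative='two-sided')
print(round(n_per_group))      # ~64 participants per group
print(2 * round(n_per_group))  # ~128 in total, matching the reported N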

Participants were recruited via postings in student groups on social media, university mailing lists, and leaflets distributed among students of the institution of the first author as well as at a university of applied sciences in the same city and a local eSports bar. Psychology students were offered course credit2 for their participation. Psychology students who did not want course credit and participants who did not study psychology were entered into a drawing for one of 12 €25 Amazon.de gift certificates. A total of 122 individuals participated in the study (68 female, 54 male). The average age of the sample was 23.83 years (SD = 4.68).3

Procedure

All interested participants were directed to a scheduling website via a URL in the e-mail/posting/leaflet where they could choose one 45-min laboratory session. These sessions were then randomly assigned to one of four experimental conditions.

The laboratory set-up for the study included a Nintendo Wii console connected to a standard television screen and two comfortable theater chairs to create a living room style atmosphere. Upon arrival at the laboratory, the participants read and signed an informed consent document before they were introduced to the video game through a controller information sheet and given a 3-min training session to familiarize themselves with the game. To manipulate supportiveness, we used confederates who were trained in playing the game prior to data collection. In order to avoid suspicion, the confederate was also presented with the controller information sheet and given time to “become familiar with the controls” (to reinforce their guise as a naive participant in the study).

After the practice session, the game was reset and both participant and confederate played a full game of 12 min (four 3-min quarters). Following gameplay, the confederate was taken to an adjacent room while the participant remained in the same room to complete an online questionnaire. At the end of the session, the participants were thanked and debriefed and received the €1 from the sharing task. The study was run by four experimenters (all female).

Stimulus Material

Video Game

Participants played NBA Jam: On Fire Edition (released in 2011 by EA Sports). The game is well suited for studying colocated social game play with two players as it (a) features teams of two cooperating players on screen at once, (b) simplifies gameplay to require only four buttons in total, and (c) presents a version of a popular global sport (basketball) in a simplified fashion that focuses on two core play mechanics: scoring points and preventing the other team from scoring.

Supportive Teammate Manipulation

Participants were assigned to play with one of four trained male confederates, who posed as naive participants and were trained to be helpful or not according to a script adapted from Velez (2015). Confederates who were instructed to be supportive began the game by stating: “Let’s use some teamwork,” and at three time points attempted to engage the participant in a cooperative move that requires both players’ participation (an alley-oop, where one player passes the ball to another player who, while jumping, catches the ball and slams it into the basketball hoop).4 Supportive confederates were instructed to pass the ball to participants as much as possible throughout the game. Confederates who were instructed to be unsupportive used neutral statements at the beginning (“Looks like we are on the same team”) and at three points throughout the game (“We are playing for 12 minutes, it looks like we have some more time to play.”) in order to ensure that all conditions had equal amounts of verbal statements from the confederate. Unsupportive confederates were instructed never to pass the ball to the participant. The detailed script that was given to the confederates (in German) as well as its English translation can be found in our Open Science Framework (OSF) project for this study (see https://osf.io/bsd97).

Game Difficulty Manipulation

To alter the difficulty of NBA Jam, players were assigned to play in either the “easy” or “hard” mode in the game’s options menu. To ensure that the participants were unaware of this manipulation, the difficulty was chosen by the experimenter before the participant and the confederate entered the laboratory.

Measures

All of the following measures were presented in the online questionnaire administered after the main playing session. The order of the measures in the questionnaire was as follows: (a) demographic information, (b) manipulation checks, (c) reciprocity expectations, and (d) prosocial behavior (sharing). The measures were written in English and translated into German by one of the experimenters. Back-translations were performed by the authors to ensure the face validity of the items. All measures (in both English and German) are available via our shared OSF project link.

Reciprocity Expectations

Replicating past work (Greitemeyer, Traut-Mattausch, & Osswald, 2012; Velez, 2015), participants were told that they would engage in a money transaction game with their teammate. Specifically, they were told that both they and their teammate had ten 10-cent coins (€1 in total) and that they could donate any number of those coins to their teammate and/or keep as many as they liked. They were also told that any coins they donated would double in value for their teammate, whereas any coins they kept would not (and that the same was, of course, true for their teammate). After reading this instruction, they were first asked: “Out of the ten 10-cent coins possible to donate, how many do you think your teammate will choose to donate to you?” The response options ranged from 0 to 10.

Prosocial Behavior (Sharing)

To assess prosocial (sharing) behavior, participants were asked to indicate how many coins they would like to give to their teammate (from 0 to 10). For practical (there was no real interaction) and ethical reasons, all participants received the full €1 as payment when they were debriefed at the end of the study.
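To make the incentive structure of this sharing task concrete, the following minimal sketch spells out the payoff arithmetic implied by the instructions above; the function name and example values are ours for illustration only:

def payoff(own_donation: int, teammate_donation: int) -> float:
    """Euro payoff for one player in the sharing task described above.

    Each player starts with ten 10-cent coins; kept coins retain their
    value, while every coin received from the teammate is worth double.
    """
    kept_coins = 10 - own_donation
    return kept_coins * 0.10 + teammate_donation * 0.20

print(payoff(10, 10))  # both donate everything: 2.00 each
print(payoff(0, 0))    # both keep everything: 1.00 each
print(payoff(0, 10))   # keep everything while the teammate donates all: 3.00

Mutual full donation (€2.00 each) is better for both players than mutual keeping (€1.00 each), but unilaterally keeping one’s coins yields the highest individual payoff (€3.00), which reflects the prisoner’s dilemma structure referred to in the Discussion.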

Manipulation Checks

To ensure that our manipulation of game difficulty was successful, we used both objective and subjective indicators of difficulty and performance. For objective performance, we noted the final score as well as the points scored by the participant and the confederate. To assess the players’ subjective experience of success, we asked them to indicate how difficult the game was and how successful they felt after playing the game. As manipulation checks for the supportiveness of the teammate (confederate), we also used objective (number of assists and alley-oops) and subjective indicators (rating the confederate teammate in terms of sympathy, supportiveness, and competence, and indicating how much support they expected from their teammate). Response options for self-report items ranged from 1 = strongly disagree to 7 = strongly agree.

Control Measures

As previous research has shown that the outcome of a video game (i.e., the success or score) can influence subsequent social interactions (Breuer et al., 2015), we planned to use the difference in points between the team controlled by the participant and the confederate and the computer-controlled team as a covariate in our analyses.

Additional Measures

For the purpose of describing the sample, we asked participants to indicate their age, gender, and how many hours they play video games in an average week. We also asked participants whether they had known the other player (the confederate) before the study. If they indicated that they did, we asked them how well they knew them on a scale ranging from 1 = barely (e.g., “You saw her/him in a lecture or on the bus without really talking to her/him”) to 5 = very well (e.g., “You are close friends, are/have been roommates”). At the end of the questionnaire, participants were asked in two open-ended questions whether they noticed anything particular during the study or had any comments. These items were used to identify participants who guessed the true purpose of the study or noticed that they played with a confederate.

Data Analysis

Following the exclusion criteria defined in the preregistration document for this study, six participants were excluded because they guessed the true purpose of the study or indicated that they knew their teammate was a confederate, either in the open comments section of the questionnaire or in a verbal statement to the experimenter. Another participant was excluded because s/he received zero assists and completed zero alley-oops in the supportive condition. None of the participants indicated that they knew the confederate before the study. We chose not to use the fourth exclusion criterion described in the preregistration document (i.e., excluding participants who gave the confederate a supportiveness rating of 7 in the unsupportive condition or a rating of 1 in the supportive condition). Using this criterion would have led to the removal of 17 participants (in addition to the seven excluded based on the other three criteria). We discussed the potential reasons for this surprisingly high number, and the number of high supportiveness ratings in the unsupportive condition (n = 15) supported our assumption that this was likely due to an issue of wording or terminology. Participants were asked whether their teammate was supportive. As the confederates were skilled players and instructed to play as well as possible, the participants might have perceived the “egoistic” performance of the confederate in the unsupportive condition as “supportive” (or helpful) for being successful in the game.5 Applying exclusion criteria 1 to 3 defined in the preregistration document resulted in a sample of N = 115 (65 female, 50 male) with an average age of 23.77 years (SD = 4.68) and an average of 4.23 hr (SD = 7.23) of weekly video game play.6 Of the participants in the net sample, 59 were in the supportive conditions (31 easy difficulty, 28 hard difficulty) and 56 in the unsupportive conditions (28 easy, 28 hard).7

In conducting tests of our a priori hypotheses and to probe the research question presented, our observed data were analyzed using two approaches, in parallel: (a) null hypothesis significance tests (NHST) that are derived from a frequentist interpretation of probability and directly test for rejection of (or failure to reject) a null hypothesis, and (b) Bayesian hypothesis testing that tests the probability of the observed data under different hypotheses (including the null). Providing an overview of both analyses and their relative affordances and constraints is beyond the scope of this paper, but a critical difference between the two approaches is that NHST does not conceptually allow for direct tests of the proposed alternative hypothesis (it only allows us to reject or fail to reject the null), whereas Bayesian hypothesis testing does (as it tests for competing likelihoods of both the predicted and the null hypothesis).
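As a concrete illustration of this dual approach (not the authors’ actual analysis script, which was run in JASP), the sketch below shows how a single comparison could be evaluated with both a frequentist p value and a Bayes factor in Python using pingouin, which reports both from one call; the Cauchy prior width of 0.707 matches the JASP default noted in Footnote 10, and the data are simulated:

# Hypothetical illustration of the parallel frequentist/Bayesian approach
# on one outcome (coins shared); the data below are simulated, not the
# study's data.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(42)
shared_supportive = rng.integers(0, 11, size=59)    # simulated 0-10 coin counts
shared_unsupportive = rng.integers(0, 11, size=56)  # simulated 0-10 coin counts

# r = 0.707 is the Cauchy prior width, matching the JASP default (Footnote 10)
result = pg.ttest(shared_supportive, shared_unsupportive, r=0.707)
print(result[['T', 'dof', 'p-val', 'cohen-d', 'BF10']])
# The p value feeds the reject / fail-to-reject decision about the null,
# while BF10 quantifies relative evidence for H1 over H0 (BF01 = 1 / BF10).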

While our original plan was to include the final score of the game as a covariate (see “Analysis Plan” in the OSF preregistration document), we refrained from doing so because an ANOVA with the experimental manipulations as independent variables and the score as the dependent variable showed that, despite the extensive training of the confederates and the instruction to always play at their best and try to win, the final score was heavily influenced by the difficulty condition, F(1, 111) = 125.8, p < .001, ω² = 0.52, BF10 > 1,000.8,9 In the easy condition the average score (i.e., the difference between the points scored by the team controlled by participant and confederate and those scored by the AI-controlled team) was in favor of the human players (M = 8.76, SD = 10.15), whereas the opposite was true for the hard condition (M = −12.64, SD = 10.44). While we expected the absolute value of the scores to differ significantly between the easy and difficult conditions, we had hoped that the values would be positive in both conditions; that is, that players would always defeat their opponent, but that the magnitude of that victory would be diminished in the difficult condition (a scenario that would have provided a more direct conceptual replication of Bowman et al., 2013). Hence, we tested Hypotheses 1–4b in two separate ANOVAs and did not consider the score as a covariate, as it was naturally confounded with game difficulty.

Hypothesis 5 was tested in a bivariate regression with reciprocity expectation as the predictor and sharing as the dependent variable. All manipulation checks were done in a series of independent-samples t tests. Data preparation and descriptive analyses were done with SPSS 22.0, while all inferential tests (both frequentist and Bayesian10) were conducted using JASP version 0.8.0.0 (JASP Team, 2016). The SPSS datasets and syntax as well as the complete JASP project (including the data, the analyses, and the results) are available in our OSF project.
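For orientation, the frequentist part of this analysis plan can be expressed compactly in Python with statsmodels; the sketch below is not the shared JASP/SPSS material, and the data frame, file name, and column names are assumptions for illustration only (the Bayesian counterparts computed in JASP are omitted):

# A minimal sketch of the frequentist tests in the analysis plan; the data
# frame, file name, and column names are assumptions for illustration only.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per participant with columns 'supportive' (0/1), 'difficulty' (0/1),
# 'expectation' (0-10), and 'shared' (0-10).
df = pd.read_csv("experiment_data.csv")  # hypothetical file name

# 2 x 2 ANOVAs for H1-H4b and RQ1 (both manipulations plus their interaction)
for outcome in ("expectation", "shared"):
    model = smf.ols(f"{outcome} ~ C(supportive) * C(difficulty)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

# Bivariate regression for H5: reciprocity expectations predicting own sharing
print(smf.ols("shared ~ expectation", data=df).fit().summary())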

Results

Confirmatory Analyses

Manipulation Checks

As stated in the Measures section, we used both objective (based on performance in the game) and subjective (based on self-report) indicators for our manipulation checks. Because the Shapiro–Wilk test suggested deviations from normality for almost all of the manipulation check variables, we used the non-parametric Mann–Whitney U test for the frequentist analyses. The results of the difficulty manipulation checks are shown in Table 1. As expected, participants in the hard condition rated the difficulty higher than those in the easy condition (Cohen’s d = 0.54, BF10 = 8.3).11 According to the suggestions of Cohen (1988), this constitutes a medium effect, and the Bayes factor provides moderate evidence (Lee & Wagenmakers, 2013).12 There was also a large difference in the final score between the easy and hard conditions (Cohen’s d = 2.08, BF10 > 1,000). While the differences for self-rated success and points scored by the participant were in the expected direction and the impact of difficulty can be interpreted as a small effect sensu Cohen (1988), these differences were not significant and the Bayes factors were indecisive (slightly in favor of the null hypothesis). That difficulty had almost no impact on the number of points scored by the confederates suggests that the training they received prior to the study was effective.

Table 1 Manipulation checks for difficulty
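The normality screening and nonparametric comparison described above follow a standard pattern; a minimal SciPy sketch of that pattern, using simulated ratings rather than the study data, is shown below:

# Toy illustration of the normality screening and nonparametric comparison
# described above; the ratings below are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
difficulty_rating_easy = rng.integers(1, 8, size=59)  # simulated 1-7 ratings
difficulty_rating_hard = rng.integers(1, 8, size=56)  # simulated 1-7 ratings

# Shapiro-Wilk per group; small p values indicate deviations from normality
print(stats.shapiro(difficulty_rating_easy))
print(stats.shapiro(difficulty_rating_hard))

# Mann-Whitney U test as the frequentist comparison between conditions
u_stat, p_value = stats.mannwhitneyu(difficulty_rating_easy,
                                     difficulty_rating_hard,
                                     alternative='two-sided')
print(u_stat, p_value)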

Table 2 shows the results of the independent-samples t tests for our supportiveness manipulation checks. Participants in the supportive conditions reported that they received more support, gave the confederate higher sympathy ratings13, received more assists from the confederate, and scored more alley-oops in the game. Notably, while the latter two were experimentally controlled behaviors (and, hence, the large effect sizes are to be expected), the former two are perception measures and provide strong evidence that our manipulations worked in that the conditions differed from each other in these respects (especially the received support metric, Cohen’s d = 0.68, BF10 = 64.3).14 As we had hoped, the ratings of the confederates in terms of competence and the expectations about the supportiveness of the confederate did not differ between the unsupportive and supportive conditions.

Table 2 Manipulation checks for supportiveness

Reciprocity Expectations

To test the impact of our experimental manipulations on reciprocity expectations, we calculated an ANOVA with supportiveness and difficulty as independent variables and the expectation of how many coins the teammate would share as the dependent variable. The number of coins participants expected their teammate to share did not differ between the easy (M = 6.20, SD = 2.88) and hard (M = 6.36, SD = 2.86) conditions or between the unsupportive (M = 6.20, SD = 3.12) and supportive (M = 6.36, SD = 2.62) conditions (see Table 3 for test statistics, including effect sizes and Bayes factors).15 For Hypothesis 1, the BF01 for supportiveness indicates that the data were 4.85 times more likely under the null hypothesis than under the alternative hypothesis. In the case of our competing Hypotheses 3a and 3b, the BF01 for difficulty suggests that the data were 4.86 times more likely under the null hypothesis than under the alternative hypothesis. According to the verbal categories proposed by Lee and Wagenmakers (2013), this is moderate evidence for the null hypotheses.

Table 3 Results of the ANOVA for expectations about how much the teammate will share

Prosocial Behavior

The effect of supportiveness and difficulty on prosocial behavior (sharing) was tested in an ANOVA with the experimental conditions as independent variables and the number of 10-cent coins shared as the dependent variable. The results of this ANOVA are displayed in Table 4. Contrary to our expectations, there was no effect of coplayer supportiveness on prosocial behavior. The number of coins shared differed neither between the unsupportive (M = 7.5, SD = 2.89) and supportive (M = 7.63, SD = 2.46) conditions nor between the easy (M = 7.51, SD = 2.59) and hard (M = 7.62, SD = 2.77) conditions.16 With regard to Hypothesis 2, the BF01 for supportiveness indicates that the data were 4.9 times more likely under the null hypothesis than under the alternative hypothesis. For Hypotheses 4a and 4b, the BF01 for difficulty suggests that the data were 4.92 times more likely under the null hypothesis than under the alternative hypothesis. Again, this is moderate evidence for the null hypotheses according to Lee and Wagenmakers (2013).

Table 4 Results of the ANOVA for prosocial behavior (sharing)

In order to test our fifth hypothesis, in which we assumed that the expectation of prosocial behaviors from a teammate would predict prosocial behavior toward the teammate, we used a bivariate linear regression. As can be seen in Table 5, expectations about how much the teammate (confederate) would share strongly predicted the participants’ own prosocial behavior. Accordingly, our data strongly support Hypothesis 5.17

Table 5 Linear regression with expectation as predictor and prosocial behavior as dependent variable

Exploratory Analyses

As Breuer et al. (2015) found that the outcome of a competitive video game can affect aggressive (i.e., antisocial) behavior toward the opponent, we investigated in additional exploratory analyses whether the result of the game also affected reciprocity expectations and prosocial behavior in our study. A Mann–Whitney U test revealed that the number of coins shared did not differ between games that were won (n = 53, M = 7.43, Mdn = 8, SD = 2.69) and games that were lost (n = 62, M = 7.68, Mdn = 9, SD = 2.67), U = 1740.5, p = .561, d = 0.09, BF10 = 0.22. Similarly, the participants’ expectations about how many coins their teammate would share also did not differ between games that were won (M = 6.25, Mdn = 5, SD = 2.89) and games that were lost (M = 6.31, Mdn = 5, SD = 2.86), U = 1663, p = .910, d = 0.021, BF10 = 0.2.18 This is further corroborated by the finding that the score predicted neither prosocial behavior (β = −.08, p = .394, BF10 = 0.28)19 nor the expectations of the participants (β = −.02, p = .809, BF10 = 0.2).

Since we did not fully reach the N = 128 participants (after exclusion) suggested by our a priori power analysis (see preregistration document), we used G*Power (version 3.1.9; Faul et al., 2007) to calculate the power we had with our net sample size to detect an effect of d = .5 (i.e., the effect size used for our a priori power calculations based on the literature) in t tests for the main effects. This analysis indicated that, with our final net sample of N = 115, we had a power of 0.76 to detect a main effect of our experimental manipulations of the magnitude d = .5.
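This sensitivity figure can be reproduced with the same power routine sketched in the Participants section; the brief sketch below uses statsmodels rather than G*Power and assumes the two marginal groups of 59 and 56 participants:

# Sensitivity check: achieved power for a d = 0.5 main-effect comparison
# with the final cell sizes (59 vs. 56), computed with statsmodels instead
# of G*Power.
from statsmodels.stats.power import TTestIndPower

achieved_power = TTestIndPower().power(effect_size=0.5,
                                       nobs1=59,
                                       alpha=0.05,
                                       ratio=56 / 59,
                                       alternative='two-sided')
print(round(achieved_power, 2))  # approximately 0.76, as reported above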

Discussion

Previous research has found that cooperative video game play can have prosocial effects for players (Adachi, Hodson, Willoughby, & Zanette, 2015; Ewoldsen, Eno, Okdie, Velez, Guadagno, & DeCoster, 2012; Greitemeyer & Cox, 2013; Velez, 2015; Velez, Mahood, Ewoldsen, & Moyer-Guse, 2014; Waddell & Peng, 2014), and bounded generalized reciprocity theory suggests that people naturally form expectations of prosocial reciprocity from ingroup members in minimal group settings (i.e., arbitrary group formation between strangers; Yamagishi et al., 1999). Recent research has explored how naturally formed ingroup reciprocity expectations (i.e., the group heuristic) are influenced when playing a video game with others and how these changes subsequently affect prosocial behaviors. Specifically, research suggests that, compared with minimal groups (i.e., strangers arbitrarily assigned to groups who did not play a video game), video game dyads with a helpful teammate confirmed players’ ingroup reciprocity expectations, while dyads with an unhelpful teammate disconfirmed these expectations, which then led to increases or decreases in prosocial behaviors, respectively (Velez, 2015). The present study sought to extend this work by examining how supportive and unsupportive teammate behaviors can influence players’ ingroup reciprocity expectations and their resulting prosocial behaviors when playing under hard or easy game difficulty settings. While our study supported the BGR assertion that one’s own prosocial behaviors are largely determined by expectations of others’ reciprocity, our data do not lend support to the hypothesis that these expectations (and, consequently, one’s own prosocial behavior) are affected by the degree of supportiveness shown by a teammate in hard or easy cooperative video game play.

As the current study suggests, supportive or unsupportive behaviors in hard or easy difficulty settings may not confirm or disconfirm prior reciprocity expectations as they do under intermediate settings (Velez, 2015). There are several possible reasons why (un)supportive behaviors in easy and hard difficulty settings did not effectively convey reciprocity expectation information. For instance, when examining difficulty settings at the ends of the spectrum (e.g., hard and easy settings), it is possible that an increased focus on scoring points (i.e., unsupportive behavior) is perceived positively under easy difficulty settings, considering that cooperative behaviors are less needed to score than under hard difficulty, and, thus, scoring might have substituted for supportive behavior. However, it is also possible that the more difficult game caused participants to focus more on their own performance or mastery of the game than on the behavior of their teammate. Consistent with a social facilitation theory interpretation (Bowman et al., 2013; Bowman, 2016), the cognitive and behavioral demands of competing in the high-challenge game might have pulled attention away from the social elements; in such a scenario, it is possible that teammates’ behaviors do not have a strong influence on subsequent prosocial behaviors regardless of their supportiveness.20

These possibly different interpretations of teammate behaviors in hard and easy difficulty settings may also require different or more nuanced manipulations of supportive teammate behavior during video game play and alternative measures of prosocial postgame behaviors. For example, when in-game behaviors are not sufficient or effective at influencing players’ subsequent reciprocity expectations, other aspects of social video game play may carry more significance for players. In the study by Velez (2015), confederates asked whether they should “pass more” or “set more screens” after each attempted helpful behavior, which provided two additional instances of collaborative dialogue that were absent from the current study. Perhaps collaborative or supportive dialogue is more effective at drawing players’ attention toward helpful teammates under circumstances of difficult game play. With regard to alternative postgame measures, it is important to remember that reciprocal interactions are often predicated on equal contributions from interactants. However, it is possible that, in comparison with the less practiced and competent participants, the skillful confederates created an environment of inequality between teammates, particularly in the current study’s manipulation of game difficulty. The prisoner’s dilemma game used in the current study adequately examines prosocial behaviors between interactants of similar standing but may not have been an appropriate prosocial measure in the current study given the dominance of the confederates’ contributions (e.g., points scored and defense against the opposing team) and the resulting reliance of participants on confederates. Future research should examine possible alternative measures, such as a dictator game (Guala & Mittone, 2010), which may be more suitable for examining the effects of social interactions in which one person takes a leading and dominant role.

Of course, our results have not only methodological, but also theoretical implications. Previous research (Velez, 2015; Velez & Ewoldsen, 2013; Velez, Greitemeyer, Whitaker, Ewoldsen, & Bushman, 2016) has advocated BGR as an appropriate theoretical background for examining the dynamics of social video game behaviors, particularly in comparison with other theoretical frameworks that overlook the social implications of how players treat each other during video games (e.g., social identity theory, general learning model, Deutsch’s theory of cooperation and competition; see Velez et al., 2016 for further examples and elaboration). Aside from the need to more systematically identify, include (on the theoretical level), and take into account (on the methodological level) relevant boundary conditions and potential moderators, it is important to discuss additional or alternative theoretical approaches in order to better understand why the predictions we made in our hypotheses for this study might have been wrong or at least imprecise.

As suggested in previous research (Velez et al., 2016), other theoretical frameworks outside or related to BGR may be needed to examine increasingly complex social video game interactions. For example, interdependence theory (Kelley & Thibaut, 1978) has been suggested as a useful theory for research on cooperative play and may be used to examine the moderating role of players being more or less dependent on teammates for success, similar to the interactions typically found in hard and easy game settings between players of unequal skill levels. Furthermore, interdependence theory suggests that players’ comfort with this vulnerability or responsibility likely influences subsequent prosocial behaviors. To connect the methodological and theoretical implications of our study, future research examining hard and easy social video game play should utilize the moderators and mediators suggested by interdependence theory, given the current study’s unexpected variations in team scores, wins versus losses (see McGloin, Hull, & Christensen, 2016), and points scored by confederates.

In sum, there are several potential reasons why the predictions we made in our Hypotheses 1–4b were wrong. It may be that our predictions were imprecise as there are relevant boundary conditions that we did not take into account, such as previous game experience and skill, the personal relevance of success in the game or an imbalance of power in the player interactions. Testing this would require alternative methodological approaches (some of which we have outlined). It may also be that BGR has less explanatory power for complex social interactions in video games and their effects than we previously assumed, and using and explicitly testing the predictions made by other theories that aim to explain cooperation and prosocial behavior, such as interdependence theory, would be a way to address this in future research. While our findings do not inherently invalidate BGR as a useful theoretical framework for studying cooperative play and prosocial behavior in video games, they do suggest that the generalizability of BGR to complex social video game interactions is potentially limited, and future research should pull from other theories geared toward understanding dynamic social interaction. Finally, given the methodological limitations of the study we discussed earlier, the administered video game session may have been too weak to influence social–cognitive processes the way we expected. Verifying this assumption would require additional studies with alternative and potentially stronger manipulations of cooperative behavior and possibly also other, more subtle or nuanced, measures of prosocial behavior.

The authors thank Jennifer Meier, Jennifer Suckow, Benedikt Senk, Nadine Jarosch, Fabian Macholdt, Ilya Botkin, and Nina van Doorn for their work as experimenters and confederates in this study.

Johannes Breuer (PhD, 2013) is a postdoctoral researcher at the professorship for media and communication psychology at the University of Cologne (Germany) and the project “Redefining Tie Strength” (ReDefTie) at the Leibniz-Institut für Wissensmedien (Knowledge Media Research Center), Tübingen (Germany). His research interests include the uses and effects of video games, learning with new media, and methods of media effects research.

John Velez is an Assistant Professor in the College of Media and Communication at Texas Tech University, USA. His Ph.D. from the Ohio State University in 2014 focused on Mass Communication Uses and Effects. His research explores the psychological processes underlying new media selection and effects. His primary focus examines prosocial effects of video games.

Nick Bowman is an associate professor in the Department of Communication Studies at West Virginia University, USA. He received his PhD in communication with an emphasis on media psychology from Michigan State University in 2010. His research focuses on the cognitive, emotional, behavioral, and social demands of interactive media such as video games and social media.

Tim Wulf is a PhD student at the University of Cologne, Germany. He works as a research assistant at the University of Mannheim, Germany and his PhD project on media use and nostalgia is funded by the Foundation of German Business (sdw). His research interests include the role of cooperation and competition in video games and media-induced nostalgia.

Gary Bente (PhD) is Professor of Media and Communication in the Department of Psychology, University of Cologne and appointed as Professor at the Department of Communication, Michigan State University. His research interests include nonverbal behavior and person perception in face-to-face as well as mediated interactions, Virtual Reality as a research tool and an emergent communication medium, and emotional and cognitive media effects.

1Details about our a priori power analysis can be found in the preregistration document for this study, available via the Open Science Framework (OSF): https://osf.io/5ubwm/

2At the institution of the first author, psychology students have to participate in a certain number of studies (measured in hours). With some buffer time, we rewarded psychology students with credits for 1 hr.

3The reason we did not meet the targeted sample of N = 148 is that it was originally planned to distribute the data collection across the institutions of the first, second, and third author of this paper; however, due to a delay in the submission and revision process for the preregistration document for this study, data collection was only possible at the institution of the first author as the semester break had already started at the other two institutions when the data collection phase began.

4If at least one of these attempts was successful, the confederates were free to use this move again at any time.

5Besides not fully meeting the targeted sample size, this change of exclusion criteria was the only major deviation from our preregistration document. A detailed list of deviations – both major and minor – along with explanations for those can be found in our OSF project for this study (https://osf.io/db9af/).

6Participants who do not (currently) play video games were asked to enter a 0 into the corresponding box in the online questionnaire. Removing these n = 63 individuals increased our sample’s average video game use per week to 9.35 hr (SD = 8.25).

7We also ran all of our confirmatory analyses (i.e., the manipulation checks and tests of our hypotheses) with all of the original exclusion criteria applied (N = 98). The results of these analyses are available as a separate JASP file in the OSF project.

8To provide some orientation for readers who are not familiar at all with Bayesian hypothesis testing: Bayes factors are measures of the strength of the relative evidence in the data for a certain hypothesis (Morey, 2014). More specifically, “Bayes factors provide a numerical value that quantifies how well a hypothesis predicts the empirical data relative to a competing hypothesis” (Schönbrodt, 2014). In the case of the BF10 that we will report for the Bayesian independent samples t tests in the results section, higher numbers indicate stronger evidence for the alternative hypothesis, while numbers < 1 provide more evidence for the null hypothesis the closer they are to 0 (Lakens, 2014). The BF01, on the other hand, indicates support for the null hypothesis (i.e., a higher BF01 means stronger support for the null hypothesis). The BF01 is simply the multiplicative inverse of the BF10 and vice versa (i.e., BF01 = 1/BF10 and BF10 = 1/BF01).

9For very large or very small numbers, JASP uses the e-notation. The exact BF10 for difficulty was 5.278e+16 (i.e., 5.278 × 10^16). To keep the numbers (and tables) in a readable format, we chose to report the Bayes factor as > 1,000 or < .001 if they are larger or smaller than these values.

10For the Bayesian analyses we used the default settings in JASP: A Cauchy prior width of 0.707 for the independent-samples t tests, an r-scale of 0.5 for fixed and 1 for random effects in the ANOVAs, and an r-scale of 0.354 for the predictors in the regression analyses.

11There was stronger evidence for the effect of the difficulty manipulation on self-rated difficulty in the smaller sample using all four exclusion criteria (Cohen’s d = 0.69, BF10 = 29.86).

12Although some authors have criticized the use of labels to categorize the evidence provided by Bayes factors (Morey, 2015; Rouder, Speckman, Sun, Morey, & Iverson, 2009), we will use the categories proposed by Lee and Wagenmakers (2013) to provide some guidance; especially for readers who are unfamiliar with Bayesian hypothesis testing. Schönbrodt (2015) contrasts these views and provides a handy “grades of evidence cheat sheet.”

13In the smaller sample (i.e., with all four of the original exclusion criteria applied) the effect of the supportiveness manipulation on sympathy ratings for the confederate was noticeably larger (Cohen’s d = 0.99, BF10 > 1,000).

14Unsurprisingly, the evidence for this manipulation check was substantially stronger when the fourth exclusion criterion was also applied (Cohen’s d = 1.56, BF10 > 1,000).

15While the means for the expectations were around 6 in all conditions, the most common expectation across conditions was that the teammate would share five coins (n = 37), followed by 10 coins (n = 32).

16The most common amount of coins shared across conditions was 10 (n = 54) and the second most frequent choice was five coins (n = 30).

17There were no differences in the results for any of our hypothesis tests between the sample with all four and the one with only the first three exclusion criteria applied.

18It should be noted that the incidences of winning and losing were distributed unequally across the conditions: In the easy/unsupportive condition, 27 games were won and only one was lost, compared with 22 wins and nine defeats in the easy/supportive condition, two wins and 26 defeats in the hard/unsupportive condition, and also two wins and 26 defeats in the hard/supportive condition.

19In the case of bivariate regression (i.e., if there is only one predictor), BF10 for the model and BFInclusion for the predictor are the same.

20While a 2 × 2 ANOVA with the data from our study did not provide evidence for an interaction effect of the difficulty and supportiveness manipulations on perceived coplayer supportiveness (p = .388, η²p = .01, BFInclusion = 0.26) we cannot rule out that the meaning of support or supportiveness was understood differently by participants (depending not only on the type of challenge that they faced, but also on factors like their own skills).

References

Johannes Breuer, Media and Communication Psychology, University of Cologne, Richard-Strauss-Str. 2, 50931 Cologne, Germany,