Open Access | Short Research Article

Complex Scenes From the International Affective Picture System (IAPS)

Agreement-Based Emotional Categories

Published Online: https://doi.org/10.1027/1618-3169/a000488

Abstract

Complex scenes from standardized stimulus databases such as the International Affective Picture System (IAPS) are organized dimensionally rather than discretely. Further, the potentially unique function of socially relevant scenes is often overlooked. This study sought to identify discrete categories of complex scenes from the IAPS and to explore whether there were qualitative features that make the emotional content of some social scenes identifiable with higher levels of agreement. One hundred and three participants (53.4% female, mean age 24.4) judged 118 IAPS scenes as reflecting fear, happy, sad, or neutral. A second judgment study was conducted with a separate group of participants (N = 117; 79.2% female; mean age 30.41) to further characterize valid affective scenes across the full range of basic emotions. Sixty images received agreement on their emotional category from >70% of judges and were considered valid. IAPS identifier codes for these images are available for reference (along with the supplementary material from the second judgment study), organized by emotional and social content. An incidental observation was that agreement rates were lower for social scenes than for nonsocial scenes across the board. Qualitative features of social scenes that were classified into emotional categories with higher levels of agreement are discussed.

Experiments involving the elicitation of emotions have been integral to our understanding of complex interactions between emotional and cognitive processes. Several modalities have been employed to elicit emotions in the laboratory; one commonly used method involves the presentation of static visual stimuli. Two types of stimuli are frequently used within this paradigm: photographs of human faces presented in isolation and of naturalistic complex scenes that present a visual array of contextually embedded real-life objects (including people). The latter embodies a movement toward ecological validity, in which the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 2008) represents a key instrument. It offers a database of over a thousand photographs depicting a range of naturalistic complex scenes, from inanimate objects to persons embedded in various situations. Slides are tagged with standardized valence values,1 so that experimental stimuli may be selected based on normative indicators according to whether they are negative, neutral, or positive in emotional content.

In recognition that a single valence scale (i.e., negative to positive) does not capture the range of emotions experienced in day-to-day life, a growing number of researchers have opted to study emotions from a categorical perspective (Finucane, 2011; Francesca et al., 2015; Keltner, Ellsworth, & Edwards, 1993; Pistoia et al., 2010, 2018; von Mühlenen, Bellaera, Singh, & Srinivasan, 2018). This position holds that emotions are better characterized as discrete entities (Eerola & Vuoskoski, 2011). For instance, fear and sadness may both be “negative” emotions but are distinct in the unique subjective experiences and psychological consequences they produce (Zadra & Clore, 2011). Given that complex scenes in the IAPS are not categorized according to the discrete emotion they elicit, these images are often qualitatively grouped or ascribed emotional meaning at the discretion of the research team. However, while facial expressions of basic emotions are more likely to be categorized homogeneously among healthy individuals (Wegrzyn, Vogt, Kireclioglu, Schneider, & Kissler, 2017), qualitative judgments of the same scene can vary markedly from one person to the next (Mikels et al., 2005). To ensure more precise experimental manipulation, some investigators have highlighted the need for a panel of judges beyond the research team to validate the emotional content of experimental stimuli (Barke, Stahl, & Kröner-Herwig, 2011; Moreno, Quezada, & Antivilo, 2016; Xu et al., 2017).

In a related line of work, research has highlighted the functional distinction between affective visual stimuli which portray humans and those which do not (Colden, Bruder, & Manstead, 2008; Peterman, Bekele, Bian, Sarkar, & Park, 2015; Silva et al., 2017). These studies are situated within a broader movement toward the study of emotion from an embodied perspective (Colden et al., 2008; Peterman et al., 2015; Rubo & Gamer, 2018; Rutherford, Maupin, & Mayes, 2018; Silva et al., 2017). This perspective recognizes that images featuring people convey unique social information and hold interpersonal relevance (Colden et al., 2008). Such images are attended (Rubo & Gamer, 2018), perceived (Birmingham & Kingstone, 2009), and neurally processed (Rutherford et al., 2018) distinctly from those without humans present. Although face-based affective stimuli incidentally limit all presented information to that which is socially relevant, complex scenes in the IAPS comprise a mixture of images which portray humans and those which do not. Besides the discrete emotional category to which they belong, these images therefore need to be further delineated according to social (or human) content to enable systematic experimental control.

In relation to socially relevant stimuli, inherent prototypes exist to facilitate the classification of facial expressions into emotional categories. For example, an open, smiling mouth is a key feature of a happy face, while v-shaped brows are key features that distinguish an angry face (Aronoff, Woike, & Hyman, 1992). In turn, faces where prototypical features are present are more likely to be identified consistently among healthy observers, with minimal dispute over their emotional categories (Wegrzyn et al., 2017). However, for socially relevant stimuli in the form of emotionally loaded complex scenes, little is known about stimulus-specific properties that may modulate categorization processes.

The first aim of this study was to identify an agreement-based set of discretely categorized complex scenes from the IAPS, presenting these data in a way that will support the study of emotion from an embodied perspective. The following emotions were targeted in a judgment task: fear, happy, sad, and neutral.2 Second, this study also sought to explore if there were qualitative features that make the emotional content of some social scenes identifiable with higher levels of agreement.

Method

Judgment Study 1

Participants

One hundred and three individuals (judges; 53.4% female) aged between 18 and 60 (M = 24.40, SD = 9.99) participated in the current study. The sample was predominantly an Australian undergraduate population (N = 85; 82.5%) recruited through the School of Psychology research participation scheme at the University of Wollongong (NSW, Australia) and also included other members of the public within Australia. Where applicable, participants received course credit points for their time. Sample size was selected to match that used in the main IAPS study, where N = 100 (Lang et al., 2008). Fourteen of the 103 participants reported current use of antidepressants. Along with gender, medication status was tested for effects on the judgment task before data were collapsed across participants and images were made the main unit of observation (described in detail below).

Procedures

Altogether 118 images were selected from the IAPS (63 social, 55 nonsocial) with the end goal of reducing these images to a smaller set of discrete emotion-eliciting stimuli (based on agreement rates) in the categories of fear, happy, sad, and neutral. Images targeting the emotional categories (fear, happy, sad) were selected thematically by the first author based on conceptual items in an established affective word list with categorical norms (affective norms for English words; Stevenson, Mikels, & James, 2007), e.g., danger or assault for fear; achievement or affection for happy; tragedy or grief for sad. Images targeting the neutral category were selected on the basis of valence ratings close to the midpoint of five as normed in the original IAPS study (Lang et al., 2008). Social images were defined as scenes with at least one clearly visible human form, while images were considered nonsocial only if they did not contain people (or body parts). Exemplars of targeted images for the social subgroup depicted scenes such as a man abducting a woman (fear), medal recipients at sports events (happy), people in mourning (sad), and persons engaged in mundane activities such as clerical work (neutral). Exemplars of targeted images for the nonsocial subgroup depicted scenes such as violently capsizing boats (fear), desserts (happy), injured animals (sad), and buildings (neutral).

All data collection took place online at a time of the participants’ choosing, in a self-paced questionnaire format using Psytoolkit (http://www.psytoolkit.org). In a forced-choice decision format, participants identified each of the 118 IAPS images (resized to 410 px × 307 px) as fear, happy, sad, or neutral, in response to the question “Select the category which best corresponds to the image above.” Each image was presented until the participant responded and was then replaced by the next image. Images were presented to all participants in the same pseudo-random order, constructed by the first author to avoid clustering of images from the same social content dimension and likely emotional category.
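
Although the presentation order used here was fixed by the first author, the clustering constraint could be automated when extending the stimulus set. The Python sketch below shows one hypothetical way to do so, reshuffling until no more than two consecutive images share the same emotion-by-social grouping; apart from #2396 and #8465 (described in the Results), all records are placeholders.

```python
import random

def constrained_order(images, max_run=2, seed=1, max_tries=10_000):
    """Shuffle images, rejecting orders in which more than `max_run`
    consecutive images share the same (emotion, social) grouping."""
    rng = random.Random(seed)
    order = list(images)
    for _ in range(max_tries):
        rng.shuffle(order)
        run, ok = 1, True
        for prev, cur in zip(order, order[1:]):
            same = (prev["emotion"], prev["social"]) == (cur["emotion"], cur["social"])
            run = run + 1 if same else 1
            if run > max_run:
                ok = False
                break
        if ok:
            return list(order)
    raise RuntimeError("No acceptable order found within max_tries")

# Hypothetical records; only #2396 and #8465 correspond to images named in the text.
stimuli = [
    {"iaps": "2396", "emotion": "neutral", "social": True},
    {"iaps": "8465", "emotion": "happy", "social": True},
    {"iaps": "placeholder_fear_nonsocial", "emotion": "fear", "social": False},
    {"iaps": "placeholder_neutral_nonsocial", "emotion": "neutral", "social": False},
]
print([s["iaps"] for s in constrained_order(stimuli)])
```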

The following analyses, including the generation of descriptive statistics and comparisons of group means, were conducted with SPSS (Version 25). In total, 12,154 votes were received across 118 images from 103 participants. Before collapsing the dataset across participants to probe emotional categorical data for the 118 images, preliminary checks were performed to ensure that gender and medication status did not influence the proportion of votes across the four labels in the judgment task. To this end, a MANOVA was conducted with gender and medication status as predictors of vote frequency for each of the four labels. Neither gender (Wilks’ Λ = .988, p = .889), medication status (Wilks’ Λ = .978, p = .705), nor their interaction (Wilks’ Λ = .985, p = .830) affected the composite multivariate score, suggesting that the proportion of votes across the four labels did not vary as a function of gender or medication status. Henceforth, images were treated as the main unit of observation.
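
For readers working outside SPSS, this preliminary check can be approximated in Python with statsmodels. The sketch below assumes a hypothetical per-judge table of vote counts; the file name and column names are placeholders, not materials released with the study.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Assumed layout: one row per judge with counts of how many of the 118 images
# they assigned to each label, plus gender and medication status.
votes = pd.read_csv("study1_vote_counts.csv")  # placeholder file name

# Gender x medication-status MANOVA on the four vote-count outcomes,
# analogous to the preliminary check reported above.
manova = MANOVA.from_formula(
    "fear + happy + sad + neutral ~ gender * medication", data=votes
)
print(manova.mv_test())  # Wilks' lambda and other multivariate statistics
```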

Main Data Analyses

The first aim of this study was to identify an agreement-based set of discretely categorized complex scenes from the IAPS (fear, happy, sad, neutral), presenting these data in a way that will support the study of emotion from an embodied perspective. All 118 images were first grouped according to majority vote, that is, their most frequently occurring label. Following selection criteria previously used to identify valid emotional stimuli (Dailey, Cottrell, Padgett, & Adolphs, 2003; Francesca et al., 2015; Pistoia et al., 2010, 2018), this battery of images was then reduced to those with rates of agreement exceeding 70%. To ensure that agreement rates across the image groups did not vary according to differences in arousal, a 4 × 2 analysis of covariance (ANCOVA; Emotional × Social content) was conducted on agreement rates, with arousal ratings from the original IAPS norming study as a covariate, before the selection criterion was applied.
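
The majority-vote grouping, agreement-rate computation, covariance check, and 70% criterion can be expressed compactly in open tooling. The following Python sketch assumes hypothetical long-format response and image-metadata files (names and columns are placeholders) and uses Type II sums of squares rather than the SPSS default, so it approximates rather than reproduces the reported analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed inputs: one row per (judge, image) with the chosen label, plus
# per-image metadata holding social content and IAPS arousal norms.
resp = pd.read_csv("study1_responses_long.csv")   # placeholder file name
meta = pd.read_csv("study1_image_metadata.csv")   # columns: image, social, arousal

# Majority vote (most frequent label) and agreement rate per image.
counts = resp.groupby(["image", "label"]).size().unstack(fill_value=0)
images = pd.DataFrame({
    "emotion": counts.idxmax(axis=1),
    "agreement": counts.max(axis=1) / counts.sum(axis=1),
}).join(meta.set_index("image"))

# 4 x 2 ANCOVA (Emotion x Social) on agreement rates with arousal as covariate,
# run before the selection criterion is applied.
ancova = smf.ols("agreement ~ C(emotion) * C(social) + arousal", data=images).fit()
print(anova_lm(ancova, typ=2))

# Selection criterion: retain images whose modal label exceeds 70% agreement.
valid = images[images["agreement"] > 0.70]
print(valid["emotion"].value_counts())
```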

The second aim of this study was to explore if there were qualitative features that make the emotional content of some social scenes identifiable with higher levels of agreement. To this end, social scenes assigned to emotional categories with rates of agreement exceeding 70% were visually scanned for common features. While there is limited literature to draw from regarding specific qualitative features that may potentially reduce ambiguity in the emotional content of social scenes, clarity of facial expressions was used as a starting point of this visual analysis.

Judgment Study 2

Judgment Study 1 employed a forced-choice decision format with constrained response options to identify affective scenes that are assigned the same emotional label more consistently than other scenes (i.e., with >70% agreement rates on their emotional content). However, affective scenes often elicit multiple discrete emotions (Bradley, Codispoti, Sabatinelli, & Lang, 2001), and it may be useful to have this information on hand during stimuli selection procedures. To this end, a second judgment task was run in a separate follow-up study to characterize the profile of emotions (across the full range of basic emotions) elicited by each affective scene that met the selection criterion in Judgment Study 1 (i.e., images classed as fear, happy, or sad with agreement rates above 70%).

Participants

A call for participants was placed on the subreddit r/SampleSize, an online international platform designed to connect researchers and voluntary respondents. Responses from three participants were not analyzed as they did not meet the minimum age requirement for adulthood (18 years). The final participant pool comprised 117 individuals (79.2% female) aged between 18 and 65 (M = 30.41, SD = 10.25) across the following countries: the US (N = 62), the United Kingdom (N = 19), Canada (N = 15), Australia (N = 9), Germany (N = 6), the Netherlands (N = 3), and Sweden (N = 3).

Procedures and Data Analyses

Images that were identified as fear, happy, or sad (with agreement rates above 70%) in Judgment Study 1 were presented sequentially in a page-by-page survey format, with six emotional labels (happy, surprise, sad, anger, disgust, and fear) appearing below each image. Participants were asked to indicate, on a scale of 1–10, how intensely they felt each of these six emotions when viewing a given image. As in Judgment Study 1, gender and medication status (26 of 117 participants reported current use of antidepressants) were tested for effects on the judgment task before data were collapsed across participants and images were made the main unit of observation. Intensity ratings across all six labels did not vary by gender or medication status (mixed model analyses with country modeled as a random effect produced the same pattern of findings). Mean intensity ratings for the six emotional labels were thus generated for each rated image using responses from the full sample.
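
A sketch of how these aggregation and robustness steps could be reproduced in Python is given below; the ratings file and its columns are hypothetical, and the mixed model uses a random intercept for country in the spirit of the check described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per (judge, image, label) with a 1-10 intensity
# rating, plus each judge's gender, medication status, and country.
ratings = pd.read_csv("study2_intensity_long.csv")  # placeholder file name

# Robustness check: intensity modeled on gender and medication status,
# with country as a random intercept.
mixed = smf.mixedlm("intensity ~ gender * medication",
                    data=ratings, groups=ratings["country"]).fit()
print(mixed.summary())

# Mean intensity profile per image across the six emotional labels,
# pooled over the full sample.
profiles = ratings.groupby(["image", "label"])["intensity"].mean().unstack()
print(profiles.head())
```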

Results

Judgment Study 1

Based on their most frequently occurring labels, the initial 118 images (63, or 53.4%, social) were classified as follows: 15 fear (8 social), 21 happy (15 social), 14 sad (9 social), and 68 neutral (31 social). The excess of neutral images was intentional, to minimize viewing fatigue. The 4 × 2 ANCOVA showed that arousal did not predict agreement rates, F(1, 109) = 2.13, p = .147. Unexpectedly, emotional content did not predict agreement rates either, F(3, 109) = .714, p = .546, nor did the interaction term, F(3, 109) = .757, p = .521, although there was a significant main effect of social content, F(1, 109) = 6.90, p = .010. Specifically, lower agreement rates were obtained for social scenes (M = 68.17%, SE = 3.50) compared to nonsocial scenes (M = 75.52%, SE = 3.39) across the board.3 Figure 1 illustrates the dispersion of social and nonsocial images across the full range of agreement rates for each of the four emotional categories.

Figure 1 Dispersion of social/nonsocial images across the full range of agreement rates for each of the four emotional categories based on Judgment Study 1.

After the selection criterion was applied (agreement rates exceeding 70%; reference line added in Figure 1), the initial battery was reduced to 60 images: 7 fear (3 social), 12 happy (7 social), 9 sad (5 social), and 32 neutral (8 social). Since group differences in agreement rates were earlier observed, the same 4 × 2 ANCOVA was repeated to ensure that social and nonsocial scenes in the reduced battery were classified with equal levels of agreement. None of the parameters in this analysis were significant, indicating that agreement rates were comparable across emotional by social content groups and relatively unaffected by arousal ratings. Table 1 presents the IAPS identifier codes, mean agreement rates, and arousal ratings for these 60 images grouped according to emotional and social content. For comprehensiveness, mean valence ratings from the original IAPS norming study are also given. For IAPS identifier codes of all 118 rated scenes, their exact agreement rates, and arousal/valence ratings, see the supplementary material at https://osf.io/z75kj.

Table 1 IAPS identifier codes and mean agreement rates for images with agreement rates >70% based on Judgment Study 1

Toward the second aim, social scenes assigned to emotional categories with rates of agreement exceeding 70% were visually scanned for common features. Within the range above 70%, faces were clearly distinguishable in most scenes, consistent with the expectation that clarity of facial cues modulates agreement rates. In addition, social scenes in the neutral (8 images) and fear (3 images) categories consistently featured a single person, with one exception in the neutral category (#2396, two strangers in commute at a train station). Sad (4 images) and happy (7 images) social scenes in the above 70% range consistently featured two or more interacting persons, with one exception in the happy category (#8465, a man running alone on the beach). Where social scenes in the fear, happy, sad, and neutral categories failed to meet the 70% agreement rate mark, their most commonly occurring competing labels were sad, neutral, fear, and happy, respectively (see the supplementary material at https://osf.io/z75kj). Possible implications for research are presented in the Discussion section.

Judgment Study 2

Twenty-eight images were classified into an emotional category with agreement rates above 70% in Judgment Study 1 (7 fear, 12 happy, 9 sad). These images were rated on intensity scales (1–10) for six emotional labels (happy, surprise, sad, anger, disgust, and fear). Mean intensity ratings on the six emotional labels for each of the 28 images (organized by emotional and social content) are made available in a second datasheet within the supplementary material. Two images received higher intensity ratings on other basic emotions (surprise, anger, disgust) than on the emotion they were validated for in Judgment Study 1. These images are marked with an asterisk in the second datasheet within the supplementary material and in Table 1.
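
Researchers screening the supplementary datasheet can apply the same flagging rule (a competing basic emotion rated more intensely than the validated emotion) programmatically. The Python sketch below uses illustrative values only, not the study's data.

```python
import pandas as pd

# Illustrative mean intensity profiles (1-10) for three hypothetical images.
profiles = pd.DataFrame(
    {"happy":    [1.2, 7.9, 1.5],
     "surprise": [4.8, 3.1, 2.0],
     "sad":      [2.6, 1.4, 7.2],
     "anger":    [3.0, 1.1, 4.1],
     "disgust":  [6.8, 1.0, 2.2],
     "fear":     [6.3, 1.3, 2.9]},
    index=["img_A", "img_B", "img_C"],
)
validated = {"img_A": "fear", "img_B": "happy", "img_C": "sad"}  # Study 1 labels

# Flag images whose highest-rated competing emotion exceeds the emotion
# they were validated for in Judgment Study 1.
flagged = [img for img, row in profiles.iterrows()
           if row.drop(validated[img]).max() > row[validated[img]]]
print(flagged)  # ['img_A'] because disgust (6.8) exceeds fear (6.3)
```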

Discussion

The first aim of this study was to identify an agreement-based set of discretely categorized complex scenes from the IAPS, presenting these data in a way that will support the study of emotion from an embodied perspective. Selected complex scenes from the IAPS were first grouped according to their most frequently occurring label and then reduced so that each emotional category was represented only by images so assigned with more than 70% agreement among judges. The end product is a battery of images more likely to be identified consistently as fear, happy, sad, or neutral by different viewers. In an experimental context, these images may be better suited to capture the effects of targeted emotions than images assigned to experimental conditions without empirical support. The IAPS identifier codes of these images are made available in the Results section as a starting point of reference to facilitate precise experimental manipulation and comparability across emotion-elicitation studies. Adding to existing categorical data on the IAPS, where complex scenes across thematic contents are treated as homogeneous (Barke et al., 2011; Mikels et al., 2005; Moreno et al., 2016), the current study presents emotional image groups delineated by whether or not they portrayed human persons. In an experimental context, this will support systematic control to account for the functional distinction between stimuli that convey socially relevant information and those that do not (Colden et al., 2008; Peterman et al., 2015; Silva et al., 2017). A strength of the present study is that it used a panel of judges similar in size and gender distribution to that used to standardize ratings in the IAPS (N = 103 vs. N = 100, and 53.4% vs. 50% female, in the current and IAPS studies, respectively). A second judgment study also served to provide data on the multiple emotion-eliciting properties of scenes presently validated as fear, happy, or sad, which may be useful supplementary information for researchers to have on hand during stimuli selection procedures.

In relation to the second aim, it is worth first noting that lower agreement rates were obtained for social scenes compared to nonsocial scenes across the board. That is, prior to applying the 70% selection criterion to isolate images with high agreement rates, social scenes (relative to nonsocial counterparts) were rated less consistently across judges with regard to their emotional content. Although this observation was incidental to the main aims of the current study, this relative deficiency highlights the importance of better understanding the qualitative features that may make the emotional content of some social scenes less open to dispute. Researchers have previously cautioned that findings from experiments where complex scenes are assigned to emotion-eliciting conditions without procedures to validate their emotional content should be interpreted conservatively (Barke et al., 2011; Moreno et al., 2016; Xu et al., 2017). Current findings suggest this caveat may apply in particular to social scenes.

As may be expected, social scenes that depicted the faces of featured persons more clearly tended to generate higher rates of agreement. Besides clarity of facial cues, the number of featured persons appeared to be an additional element that modulated the level of agreement a given scene generated on its emotional content. Neutral and fear social scenes tended to receive agreement rates above 70% if they featured a single person. For neutral and fear social scenes that did not meet the 70% agreement criterion, the presence of multiple persons most commonly produced competing responses on happy and sad labels, respectively. In contrast, sad and happy scenes tended to receive agreement rates above 70% if they featured at least two interacting persons. For sad and happy scenes below this criterion, the depiction of a single isolated person most commonly produced competing responses on fear and neutral labels, respectively. Tentatively, these observations suggest that social scenes for the neutral and fear categories may be better targeted through single embodiments of facial cues, while the sad and happy categories may be better targeted through multiple embodiments of facial cues. Nonetheless, as the second aim was exploratory in nature, no a priori attempts to control for any one feature were made. Thus, until clarified in further research, it cannot be ruled out that these patterns of clustering were, in part, due to the nature of the specific images selected for the present study.

The phrasing of instructions given to participants may also be relevant in interpreting the present observations. Across social and nonsocial scenes, participants received the instruction to “Select the category which best corresponds to the image above.” While this is less of a concern for nonsocial scenes, responses tied to social scenes may capture a mixture of how a given scene made the perceiver feel and the perceiver’s judgment of the protagonist(s)’ feelings. Clearer instructions framed to capture the former, together with closer attention to the number of featured persons to enhance selectivity, may yield more balanced social/nonsocial subgroups across emotional categories in future endeavors to extend the current study.

References

  • Aronoff, J., Woike, B. A., & Hyman, L. M. (1992). Which are the stimuli in facial displays of anger and happiness? Configurational bases of emotion recognition. Journal of Personality and Social Psychology, 62, 1050–1066. 10.1037/0022-3514.62.6.1050

  • Barke, A., Stahl, J., & Kröner-Herwig, B. (2011). Identifying a subset of fear-evoking pictures from the IAPS on the basis of dimensional and categorical ratings for a German sample. Journal of Behavior Therapy and Experimental Psychiatry, 43(1), 565–572. 10.1016/j.jbtep.2011.07.006

  • Birmingham, E., & Kingstone, A. (2009). Human social attention. Progress in Brain Research, 176, 309–320. 10.1016/s0079-6123(09)17618-5

  • Bradley, M. M., Codispoti, M., Sabatinelli, D., & Lang, P. J. (2001). Emotion and motivation II: Sex differences in picture processing. Emotion, 1, 300–319. 10.1037/1528-3542.1.3.300

  • Colden, A., Bruder, M., & Manstead, A. S. R. (2008). Human content in affect-inducing stimuli: A secondary analysis of the International Affective Picture System. Motivation and Emotion, 32, 260–269. 10.1007/s11031-008-9107-z

  • Dailey, M., Cottrell, G. W., Padgett, C., & Adolphs, R. (2003). EMPATH: A neural network that categorizes facial expressions. Journal of Cognitive Neuroscience, 14, 1158–1173. 10.1162/089892902760807177

  • Eerola, T., & Vuoskoski, J. K. (2011). A comparison of the discrete and dimensional models of emotion in music. Psychology of Music, 39(1), 18–49. 10.1177/0305735610362821

  • Finucane, A. M. (2011). The effect of fear and anger on selective attention. Emotion, 11, 970–974. 10.1037/a0022574

  • Francesca, P., Antonio, C., Simona, S., Massimiliano, C., Caterina, P., Benedetta, C., … Marco, S. (2015). Contribution of interoceptive information to emotional processing: Evidence from individuals with spinal cord injury. Journal of Neurotrauma, 32, 1981–1986. 10.1089/neu.2015.3897

  • Gerrards-Hesse, A., Spies, K., & Hesse, F. W. (1994). Experimental inductions of emotional states and their effectiveness: A review. British Journal of Psychology, 85(1), 55–78. 10.1111/j.2044-8295.1994.tb02508.x

  • Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cognition and Emotion, 9(1), 87–108. 10.1080/02699939508408966

  • Keltner, D., Ellsworth, P. C., & Edwards, K. (1993). Beyond simple pessimism: Effects of sadness and anger on social perception. Journal of Personality and Social Psychology, 64, 740–752. 10.1037//0022-3514.64.5.740

  • Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual (Rep. No. A-8). Gainesville, FL: University of Florida.

  • Mikels, J. A., Fredrickson, B. L., Larkin, G. R., Lindberg, C. M., Maglio, S. J., & Reuter-Lorenz, P. A. (2005). Emotional category data on images from the International Affective Picture System. Behavior Research Methods, 37, 626–630. 10.3758/bf03192732

  • Moreno, C. P., Quezada, V. E., & Antivilo, A. (2016). Identifying fear-evoking pictures from the International Affective Picture System (IAPS) in a Chilean sample. Terapia Psicológica, 34, 209–215. 10.4067/S0718-48082016000300005

  • Peterman, J. S., Bekele, E., Bian, D., Sarkar, N., & Park, S. (2015). Complexities of emotional responses to social and non-social affective stimuli in schizophrenia. Frontiers in Psychology, 6, 320. 10.3389/fpsyg.2015.00320

  • Pistoia, F., Conson, M., Carolei, A., Dema, M. G., Splendiani, A., Curcio, G., & Sacco, S. (2018). Post-earthquake distress and development of emotional expertise in young adults. Frontiers in Behavioral Neuroscience, 12, 91. 10.3389/fnbeh.2018.00091

  • Pistoia, F., Conson, M., Trojano, L., Grossi, D., Ponari, M., Colonnese, C., … Sara, M. (2010). Impaired conscious recognition of negative facial expressions in patients with locked-in syndrome. The Journal of Neuroscience, 30, 7838–7844. 10.1523/jneurosci.6300-09.2010

  • Rubo, M., & Gamer, M. (2018). Social content and emotional valence modulate gaze fixations in dynamic scenes. Scientific Reports, 8(1), 3804. 10.1038/s41598-018-22127-w

  • Rutherford, H. J. V., Maupin, A. N., & Mayes, L. C. (2018). Parity and neural responses to social and non-social stimuli in pregnancy. Social Neuroscience, 14, 545–548. 10.1080/17470919.2018.1518833

  • Silva, H. D., Campagnoli, R. R., Mota, B. E. F., Araújo, C. R. V., Álvares, R. S. R., Mocaiber, I., … Souza, G. G. L. (2017). Bonding pictures: Affective ratings are specifically associated to loneliness but not to empathy. Frontiers in Psychology, 8, 1136. 10.3389/fpsyg.2017.01136

  • Stevenson, R. A., Mikels, J. A., & James, T. W. (2007). Characterization of the affective norms for English words by discrete emotional categories. Behavior Research Methods, 39, 1020–1024. 10.3758/BF03192999

  • von Mühlenen, A., Bellaera, L., Singh, A., & Srinivasan, N. (2018). The effect of sadness on global-local processing. Attention, Perception, & Psychophysics, 80, 1072–1082. 10.3758/s13414-018-1534-7

  • Wegrzyn, M., Vogt, M., Kireclioglu, B., Schneider, J., & Kissler, J. (2017). Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLOS ONE, 12, e0177239. 10.1371/journal.pone.0177239

  • Xu, Z., Zhu, R., Shen, C., Zhang, B., Gao, Q., Xu, Y., & Wang, W. (2017). Selecting pure-emotion materials from the International Affective Picture System (IAPS) by Chinese university students: A study based on intensity-ratings only. Heliyon, 3, e00389. 10.1016/j.heliyon.2017.e00389

  • Zadra, J. R., & Clore, G. L. (2011). Emotion and perception: The role of affective information. Wiley Interdisciplinary Reviews: Cognitive Science, 2, 676–685. 10.1002/wcs.147

1Each IAPS slide also comes with standardized ratings of arousal (how calming or alerting an image is) and dominance (the extent of a viewer’s perceived control relative to the displayed stimulus). While the latter dimension has not been well explored, the former is often used as a control variable in investigations (including the present one) of the effects of other stimulus properties.

2Besides fear and sadness, the full range of basic negative emotions includes anger and disgust. The former was not presently targeted as static visual stimuli are poorly suited for eliciting anger (Gerrards-Hesse, Spies, & Hesse, 1994; Gross & Levenson, 1995; Mikels et al., 2005). Further, disgust was not targeted due to ethical concerns associated with the presentation of offensive or emotionally distressing images. However, for comprehensiveness, a second judgment study was presently conducted to characterize valid affective scenes across the full range of basic emotions (described in detail under “Judgment Study 2” in the Method and Results sections).

3When social and nonsocial scenes were compared for differences in JPEG-compressed file size (i.e., an index of visual complexity), group means did not differ significantly, t(116) = .10, p = .921.

Maryann Wei, University of Wollongong, School of Psychology, 41.125, Northfields Ave, Wollongong, NSW 2522, Australia,