Original Article

Hate Speech as an Indicator for the State of the Society

Effects of Hateful User Comments on Perceived Social Dynamics

Published Online: https://doi.org/10.1027/1864-1105/a000294

Abstract

Previous research indicates that user comments serve as exemplars and thus affect perceived public opinion. Moreover, they also shape the attitudes of their readers. However, studies exploring the consequences of user comments for attitudes and perceived public opinion focus almost exclusively on controversial issues. The current study examines whether hate speech attacking social groups on the basis of characteristics such as religion or sexual orientation also affects how people think about these groups and how they believe society perceives them. Moreover, we also investigated the effects of hate speech on prejudiced attitudes. To explore the hypotheses and research questions, we preregistered and conducted a 3 × 2 experimental study varying the amount of hate speech (none/few/many hateful comments) and the group that was attacked (Muslims/homosexuals). Results show no effects of the amount of hate speech on perceived public opinion for either group. However, when homosexuals are attacked, hate speech negatively affects perceived social cohesion. Moreover, for both groups, we find interaction effects between preexisting attitudes and hate speech on discriminatory demands. This indicates that hate speech can increase polarization in society.

User comments appearing below news items have become a characteristic feature of online news. Even though only a minority of Internet users actively contribute to online discussions (Ziegele et al., 2018), the majority reads comments at least occasionally (57%; Springer et al., 2015). This makes user comments, and the potential effects of contact with online discussions, a highly relevant topic that has attracted the interest of many communication scholars.

One aspect of user comments that has received much attention is the low quality of online discussion. Recent numbers indicate that about one-half of German Internet users have already come into contact with hateful comments or hate speech online (Landesanstalt für Medien NRW, 2019). Hate speech can be defined as verbal aggression attacking groups or individuals because they belong to social categories such as race, gender, or sexual orientation (Erjavec & Kovačič, 2012). When it comes to the consequences of encountering this type of comment, a number of studies have shown that user comments serve as exemplars (Peter et al., 2014; Zerback & Fawzi, 2016) and thus provide a baseline both for inferences about public opinion (Neubaum & Krämer, 2016) and for the formation of attitudes (Hsueh et al., 2015). Previous studies investigating the role of user comments in perceived public opinion and attitude formation have demonstrated these effects for controversial issues such as vaccination (Peter et al., 2014), animal testing (Lee & Jang, 2010), or nanotechnology (Hsueh et al., 2015). It therefore remains unclear whether hateful comments attacking social groups also affect perceived public opinion or attitudes toward these groups. If user comments serve as exemplars, it is plausible to assume that hateful comments also affect the perceived standing of the attacked group in society and attitudes toward this group. Since exploring these effects of hate speech also provides important insights into social dynamics and the role of hateful user comments in polarization processes, the current study aims to fill this gap in research.
Thus, we experimentally explore whether the share of hateful comments affects (1) the perceived share of the population and of Facebook users who hold a negative attitude toward the attacked social group, (2) the perception of social cohesion, and (3) the tendency to hold more extreme attitudes about the social group. In sum, the study extends our understanding of the possible consequences of hate speech in user comments.

Hate Speech in the Comment Section

Theoretically, user comments have the potential to enable a diverse audience to engage in well-reasoned discourse through the exchange of different points of view on various issues. But not all comments contribute to such positive outcomes of a discussion. Instead, they can display, for instance, “an unnecessarily disrespectful tone toward the discussion forum, its participants, or its topics” (Coe et al., 2014, p. 660), a phenomenon referred to as online incivility. This lack of respect can target individuals and violate politeness norms (personal-level incivility) or disrespect democratic, deliberative norms (public-level incivility) (Muddiman, 2017). When it comes to hate speech, there is still no uniform definition, even though the term has received a lot of attention both in the scientific context and in public debates. We refer to hate speech as any form of abusive, intimidating, harassing, or hateful expression in online discussions that is directed against people because they are part of a social group (Erjavec & Kovačič, 2012). More precisely, four characteristics distinguish hate speech from other negative forms of online discussion such as incivility, impoliteness, or cyber mobbing. (1) The most common feature in hate speech definitions is the reference to a target (Bilewicz & Soral, 2020; Wilhelm et al., 2020). Hate speech is a discriminating expression directed at a person as part of a social group or at the social group as a whole. In general, any human characteristic can serve as a trigger for this discrimination, but the categories most commonly referred to in hateful comments are ethnicity, nationality, gender, religion, sexual orientation, and disability (Kulkarni et al., 2018; Silva et al., 2016). Other characteristics are traits such as political conviction (Erjavec & Kovačič, 2012) or professions (e.g., journalists; Obermaier et al., 2018).
Also, (2) hate speech is directed at an individual or a social group that the attacker does not personally know, as would be the case in cyber mobbing (Obermaier et al., 2018) or other forms of online harassment. Moreover, (3) the discriminating expressions serve the purpose of harming those attacked or subordinating the members of the discriminated social group (Guo & Johnson, 2020). Due to this characteristic, it can be considered a specific type of harmful speech (Faris et al., 2016). Further, (4) hate speech can differ with regard to the intensity of the attack. It comprises all kinds of discriminating expressions, ranging from the repetition of stereotypes to severe forms of name-calling or the encouragement of physical violence. This point is especially important with regard to legal regulations, which have to determine whether expressions of hate speech are still covered by freedom of expression or have to be deleted or even prosecuted (Sellars, 2016).

Findings for Germany indicate that about 75% of Internet users have already been confronted with hate in online discussions (Landesanstalt für Medien NRW, 2019), which underlines the importance of investigating the potential effects of hate speech. If hate dominates online discussions, this might have negative consequences for social dynamics. This assumption is outlined in more detail in the following section.

User Comments and Their Effects on the Perception of Social Dynamics

Exemplification theory mainly focuses on the presence and effects of so-called exemplars in media coverage (Zillmann & Brosius, 2000). Exemplars refer to single persons or events that are typical cases of the issue or social group at hand. For traditional news media, exemplars have been shown to be highly relevant and even more important than base-rate information for multiple judgments, such as reality perceptions (Zerback & Peter, 2018) and personal assessments (Brosius, 1999). A recent line of research has shown that user comments, too, can serve as exemplars. As pointed out by Friemel and Dötsch (2015), commenters are considered to be more or less representative of society. This makes them a potential anchor for generalizations and thus for reality perceptions such as frequency distributions or dominant attitudes within society. Lee and Jang (2010) showed that users confronted with comments congruent with their own opinion assume that society in general is more congenial than do users who saw comments incongruent with their opinion. Another study, by Neubaum and Krämer (2016), shows that the valence of user comments affects the perception of how Facebook users and society in general think about controversial issues. If user comments conveyed a negative slant toward assisted suicide or adoption rights for same-sex couples, participants also assumed that the share of people on Facebook and in society holding negative attitudes was higher. Thus, a one-time confrontation with an online discussion already seems to affect people’s perception of public opinion. In this context, Zerback and Fawzi (2016) extend previous studies with the finding that the number of comments expressing an opinion matters. They report no effects when participants saw only two exemplars expressing an opinion on the eviction of violent immigrants. However, when participants were exposed to 10 comments, the opinion of the commenters paralleled the perceived opinion of Internet users and of the population, even though the effects for the latter did not reach significance.

All these studies have in common that they focus on the effects of user comments on perceived public opinion for controversial issues. Thus, even though hateful comments attacking social groups are a prevalent part of online discussions, it remains to be clarified whether discriminating comments also translate into the perception that many people actually think negatively of these groups.

Based on the assumption that user comments serve as exemplars and in line with findings that have been outlined in this section, we assume:

Hypothesis 1 (H1):

An online discussion containing hate speech has a positive effect on the estimated share of (a) Facebook users and (b) the society holding a negative attitude toward the social group that is attacked in the comments compared to an online discussion without hate speech.

Moreover, since it has been shown that the number of comments is also of importance for generalizing from user comments to Facebook members and society in general (Zerback & Fawzi, 2016), we further hypothesize:

Hypothesis 2 (H2):

The more hate speech an online discussion contains, the higher the estimated share of (a) Facebook users and (b) the society holding a negative attitude toward the social group that is attacked in the comments.

The experience that social groups are attacked in comments might not only affect the perceived frequency distribution of negative attitudes toward these groups but might also influence the perception of social cohesion in a society. Social cohesion describes a societal state characterized by the integration of individuals and social groups into a larger collective unit that shares a more or less common value system (Yamamoto, 2011). The concept focuses on “diverse aspects of the dynamics of social relations, such as social exclusion, participation and belonging” (Novy et al., 2012, p. 1873). As proposed by Friedkin (2004), social cohesion consists of two indicators at the individual level: (1) individuals’ membership attitudes and (2) individuals’ membership behavior. Membership attitudes include, for example, the level of identification with the collective unit and the desire to be part of the unit, but also attitudes about other members of the group. Behavioral indicators concern, among others, decisions to keep, weaken, or strengthen membership in the collective unit. Social cohesion can be considered a continuum, with cohesion at one end of the scale and social dissolution at the other. In a state of dissolution, the level of inclusiveness, but also norms like trust and behavioral indicators such as the willingness to help others, are at a low level (Lockwood, 1999).

If people are confronted with hate speech attacking social groups, this is likely to affect their perception of social cohesion. Hate speech makes visible that society is fragmented and signals a low level of inclusion, since members of different social groups isolate themselves and even oppose each other. In this way, hate speech might contribute to the perception that social cohesion in society is low. Thus, it can be assumed:

Hypothesis 3 (H3):

The amount of hate speech in the comment section has a negative effect on perceived social cohesion.

Attitudinal Effects of Hate Speech in the Comment Section

The confrontation with discriminating and derogatory user comments might affect not only the perception of social dynamics but also attitudes. More precisely, we want to investigate if and how hate speech affects attitude polarization, which can be defined as the shift of attitudes toward more extreme positions (Lord et al., 1979). For several issues, studies on the effects of user comments confirm that the slant of comments affects people’s attitudes toward a given issue (Anderson et al., 2018; Sung & Lee, 2015). Concerning hate speech, findings also indicate that exposure to stereotyped content changes the way people think about the attacked group. An experimental study by Hsueh et al. (2015) showed that prejudiced user comments containing stereotypes about Chinese students increased negative feelings toward that group. A study by Winiewsky et al. (2016) extends these findings by showing that the consequences go beyond increased stereotyping. They find that people exposed to hate speech (e.g., transphobic, anti-immigrant, or sexist language) tended to avoid the attacked groups in their personal environment and agreed with measures restricting the legal rights of these groups or excluding them from society. These effects can be explained by two theoretical approaches. First, according to the idea of media priming, depictions of stereotypes in the media become incorporated into the thinking of those exposed to this kind of content and subsequently affect stereotyped thinking, judging, and behavior (Ramasubramanian, 2007). Thereby, both implicit stereotypes (i.e., stereotypes that are automatically activated) and explicit stereotypes (i.e., overtly expressed negative attitudes) can increase (Arendt, 2013). Another explanation for the effects of hate speech on polarized attitudes lies in the processes of mainstreaming and desensitization.
Mainstreaming can be defined as the process of shifting public discourse toward a more radical stance (Kallis, 2013). What was once considered completely inappropriate and unspeakable becomes normalized and part of the spectrum of diverse opinions (Cammaerts, 2018). This process can also be fueled by comment sections that provide a platform for extreme and anti-democratic positions and thereby normalize once repulsive ideas in front of a large audience. This is dangerous not only because it changes public discourse but also because it can affect attitudes within society. Soral et al. (2018) find that repeated contact with hate speech leads to desensitization, which in turn affects cognitive and affective reactions to discriminating content. In a series of studies, they showed that repeated exposure to hate speech decreased hate speech sensitivity and increased prejudice against the attacked group. Hate speech thus seems to shift the boundaries of acceptable attitudes and has the potential to make people more open to radical standpoints. While Soral et al. (2018) found direct effects on (stereotyped) attitudes, other studies confirm the importance of preexisting attitudes (Sung & Lee, 2015). Anderson et al. (2018) explored the effects of uncivil comments on the risk perception of nanotechnology. The effect of incivility depended on preexisting attitudes toward nanotechnology: When confronted with higher levels of incivility, those supporting nanotechnology showed lower risk perceptions, while those with lower support indicated higher risk perceptions. Concerning the effects of user comments on democratic processes, the authors conclude:

“Much in the same way that watching uncivil politicians argue on television causes polarization among individuals, impolite and incensed blog comments can polarize online users based on value predispositions utilized as heuristics when processing the blog’s information.” (p. 383)

In sum, user comments have the potential to affect attitudes. This has been shown for various controversial topics, but also for prejudice against social groups in the comment section. Moreover, preexisting attitudes might be important, since they have been shown to drive polarizing dynamics. Applied to the context of hate speech, this means that those with a negative attitude toward the attacked group are more likely to assimilate to the position of the discriminating commenters, while those with a positive attitude are more likely to reject this position. In short, we assume:

Hypothesis 4 (H4):

The amount of hate speech in the comment section has a positive effect on polarized attitudes toward the group that is attacked.

Method

Procedure and Participants

To test the hypotheses, we conducted an online survey with a 3 × 2 between-subjects experimental design. We varied both the amount of hate speech in the comment section (no hate speech, few hateful comments, many hateful comments) and the group that was attacked (Muslims, homosexuals). The comments in the stimulus were pretested in advance. Also, before conducting the experiment, we preregistered the idea of the study, the hypotheses, the stimulus material, the required number of participants (based on a power analysis), and the measures used for the statistical analysis on osf.org (link to the preregistration: https://osf.io/f9xrh?view_only=9485d27bf787408fa4ea232ce56e5010). For the power analysis, we used G*Power version 3.1 (Faul et al., 2007; test family: F-test; statistical test: analysis of variance (ANOVA), fixed effects, omnibus, one-way; effect size: 0.15; α error: 0.05; power: 0.95; number of groups: 6), which indicated that 888 participants were needed to test our hypotheses. We assumed small effect sizes since our dependent variables (perceived public opinion, perceived social cohesion, attitude polarization) are rather stable constructs and should therefore not be tremendously affected by a one-time confrontation with eight comments.
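The reported power analysis can be approximated without G*Power. The following sketch computes the power of a one-way fixed-effects omnibus ANOVA F-test from the noncentral F distribution and searches for the smallest balanced total sample size (a multiple of six, an assumption about the rounding convention) reaching 95% power; it lands close to the preregistered N = 888.

```python
from scipy import stats

def anova_power(n_total, f, k, alpha=0.05):
    """Power of a one-way fixed-effects omnibus ANOVA F-test for total
    sample size n_total, Cohen's effect size f, and k groups."""
    df1, df2 = k - 1, n_total - k
    lam = f ** 2 * n_total                      # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)   # critical F under H0
    return stats.ncf.sf(f_crit, df1, df2, lam)  # P(F > f_crit | H1)

# smallest balanced N (multiple of 6) reaching 95% power for f = .15
n_total = 12
while anova_power(n_total, f=0.15, k=6) < 0.95:
    n_total += 6
```

With these inputs the search stops within a few participants of the preregistered 888; small deviations reflect implementation details of the noncentral F quantiles.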

The data collection took place between October 21 and November 11, 2019. In sum, 920 Facebook users took part in our online experiment (Mage = 41.13 years, SD = 14.57; 56% female; 82% with at least a high school degree) and completed the whole questionnaire. The participants were recruited through an online access panel of Internet users from Germany, Switzerland, and Austria (SoSci; Leiner, 2016).

Stimulus

For the experiment, we created six different Facebook newsfeeds, each including a post and comments underneath it. We chose Muslims and homosexuals as the attacked groups for two reasons. First, concerning prejudiced attitudes, findings indicate that about 25% of the German population hold negative attitudes toward Muslims and 12% hold negative attitudes toward homosexuals (Küpper et al., 2017). Second, Internet users frequently encounter hateful comments against these groups: According to a study by Geschke et al. (2019), 77% of German Internet users have at least occasionally encountered aggressive or discriminating statements against Muslims and 62% against homosexuals. This underlines the importance of investigating the effects of hateful comments against these groups. The news posts dealt either with statistics on people of Muslim faith or with Christopher Street Day in Germany. Below both news posts, participants saw an online discussion that consisted of eight comments and varied between conditions with regard to the amount of hate speech: no hateful comments, two hateful comments (and six neutral comments), or six hateful comments (and two neutral comments). These decisions can be justified as follows: We showed participants eight comments so that the online discussion would not be too long; a shorter discussion makes it more likely that participants read it carefully. We also wanted a noticeable difference in the level of discrimination between the few and many hateful comments conditions. At the same time, participants should in all conditions be confronted with shares of discriminating and neutral comments that can also be found under posts of mainstream news providers. That is why we decided not to show participants exclusively hate speech in the third condition, since this would not be expected under a post by Spiegel Online.
In sum, the experimental conditions varied both with regard to the group that was attacked (Muslims/homosexuals) and the amount of hate speech (no/few/many hateful comments). Examples for the stimulus versions as well as a translation of all comments can be found along with other supplementary material at https://osf.io/km4eg/?view_only=886cdc075d904377aedc82f7133d18f6.

The comments used in the experiment were pretested in advance. Fifty-two participants were asked to rate user comments with regard to their degree of discrimination. For Muslims and homosexuals, the six user comments evaluated as most discriminating were chosen for the main study (mean index for comments related to Muslims: M = 4.62, SD = 0.67; homosexuals: M = 4.87, SD = 0.39; scale: 1 = not discriminating, 5 = very much discriminating). The pretest also revealed that the eight neutral comments were evaluated as not at all discriminating (M = 1.50, SD = 0.48). Moreover, the news posts of the mock newsfeeds were also rated as not discriminating (Muslims: M = 1.69, SD = 0.95; homosexuals: M = 1.15, SD = 0.46). The hate speech comments used in the final experiment contain discriminating statements either against Muslims (e.g., “Islam is an inhuman cult and its followers are all fanatics”) or against homosexuals (e.g., “I don’t want to have anything to do with those fags. I just think it’s disgusting”). Neutral comments did not include discriminating speech (e.g., “Interesting article. I liked the comparisons with previous years”).

Results of the main study also revealed that our manipulation was successful. Participants perceived the comments in the many hateful comments condition as most discriminating against the attacked group (Muslims: M = 6.48, SD = 1.05; homosexuals: M = 6.51, SD = 1.21; scale: 1 = do not agree at all, 7 = fully agree), followed by the few hateful comments condition (Muslims: M = 4.94, SD = 1.68; homosexuals: M = 5.17, SD = 1.70) and the no hateful comments condition (Muslims: M = 3.09, SD = 1.95; homosexuals: M = 2.86, SD = 1.68). An ANOVA showed that the groups differed significantly [Muslims: F(2, 460) = 215.943, p < .001; homosexuals: F(2, 454) = 168.901, p < .001].

Measures

Before the stimulus presentation, we measured preexisting attitudes toward Muslims and homosexuals (negative-positive, 11-point slider). We chose this single item because we did not want to prime any stereotypes before presenting the stimulus, which might have interfered with the effects we wanted to investigate. Also, we wanted to capture only the overall attitude that comes to participants’ minds when they are asked to judge the groups attacked in the stimulus. The same measure is used in a study by Küpper et al. (2017).

To measure perceived public opinion, participants were asked to estimate the share of people on Facebook as well as in society who hold negative attitudes toward Muslims or homosexuals, depending on the stimulus they were exposed to (slider from 0 to 100%; also used in Neubaum & Krämer, 2016; Zerback & Fawzi, 2016).

For perceived social cohesion, participants stated their agreement with three items (“Society falls apart”; “In Germany, more and more people are marginalized”; “In Germany, cohesion is in danger”; 1 = do not agree at all, 7 = fully agree) that we obtained from Zick and Küpper (2012). A mean index was calculated (α = .82).

Attitudes toward the social groups were measured by asking participants about their agreement with different demands (1 = do not agree at all, 7 = fully agree). To measure attitudes toward Muslims, we chose two items from Lee et al. (2013): “Muslims should not be allowed to work at crowded places, such as airports” and “I would support political actions to prevent the building of more mosques.” We further added the item “Muslims should not be allowed to wear headscarves in public institutions.” To measure attitudes toward homosexuals, we used three items obtained from Seise et al. (2002): “Homosexual couples should not be allowed to adopt children”; “Homosexuals should not be allowed to get married”; “Homosexuals should not work with children and adolescents.” For the analyses, these items were considered separately, as they did not show satisfying internal consistency (Muslims: α = .65, homosexuals: α = .69).
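The internal consistencies reported in this section (α = .82 for the cohesion index; .65 and .69 for the attitude items) follow the standard Cronbach's alpha formula. A minimal sketch, where the only assumption is the item matrix layout (rows = respondents, columns = items):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a 2-D array: rows = respondents, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the sum score
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

When alpha falls below conventional thresholds, as for the attitude items here, analyzing the items separately is a common fallback.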

Results

Hypotheses Testing

In H1a it was assumed that hate speech has a positive effect on the estimated share of Facebook users holding a negative attitude toward the attacked social group, compared to user comments without hate speech. To investigate this hypothesis, we merged the two conditions containing hate speech (few and many) and compared the overall mean of all participants who saw hate speech to that of participants in the neutral condition using a t-test. The results show that comments containing hate speech, compared to comments containing no hate speech, have no effect on the estimated share of Facebook users holding a negative attitude toward Muslims, t(446) = 0.39, p = .70; Mneutral = 44.63%, SD = 21.36, Mhate = 45.50%, SD = 22.21, or homosexuals, t(458) = −1.01, p = .32; Mneutral = 39.22%, SD = 18.92, Mhate = 37.40%, SD = 18.21. Thus, H1a is not supported.
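The comparison above is a standard two-sample t-test of the pooled hate conditions against the neutral condition. A minimal sketch on simulated data: the group sizes and normal draws are assumptions for illustration (only the means and SDs mirror the Muslim condition reported above), not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# simulated estimates (0-100%) of the share of Facebook users with a
# negative attitude; illustrative draws, not the original responses
neutral = rng.normal(44.63, 21.36, 150)   # no hate speech condition
hate = rng.normal(45.50, 22.21, 298)      # "few" and "many" conditions pooled

t_stat, p_value = stats.ttest_ind(neutral, hate)  # Student's t, pooled variance
```

With group SDs this large relative to the mean difference, the test is expected to stay far from significance, matching the reported null result.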

In H1b it was assumed that hate speech has a positive effect on the estimated share of society holding a negative attitude toward the attacked social group, compared to user comments without hate speech. The results show that hate speech in comments has no effect on the estimated share of society holding a negative attitude toward Muslims, t(453) = −0.25, p = .80; Mneutral = 42.01%, SD = 20.80, Mhate = 41.48%, SD = 20.56. Hate speech does affect the estimated share of society holding a negative attitude toward homosexuals, t(459) = −2.00, p = .05, but in the direction opposite to the hypothesis: Participants who saw no hate speech estimated this share slightly higher (M = 39.25%, SD = 19.40) than participants who were exposed to hate speech in the comments (M = 35.53%, SD = 18.81). Thus, H1b is not supported.

In H2a we assumed that the more hate speech an online discussion contains, the higher the estimated share of Facebook users holding negative attitudes toward the attacked social groups. However, the amount of hate speech has no effect on the estimated share of Facebook users holding a negative attitude toward Muslims, F(2, 445) = 0.90, p = .41; Mneutral = 44.63%, SD = 21.36, Mfew = 43.91%, SD = 22.41, Mmany = 47.10%, SD = 21.97, or homosexuals, F(2, 457) = 0.51, p = .60; Mneutral = 39.22%, SD = 18.92, Mfew = 37.29%, SD = 17.69, Mmany = 37.52%, SD = 18.83. Thus, H2a is not supported.

In H2b we assumed that the more hate speech an online discussion contains, the higher the estimated share of society holding negative attitudes toward the social groups that are attacked. Again, we find no effect of the amount of hate speech on the estimated share of the society holding a negative attitude toward Muslims, F(2, 452) = 0.05, p = .96; Mneutral = 42.01%, SD = 20.80, Mfew = 41.28%, SD = 21.09, Mmany = 41.69%, SD = 20.08, or homosexuals, F(2, 458) = 2.05, p = .13; Mneutral = 39.25%, SD = 19.40, Mfew = 35.87%, SD = 19.31, Mmany = 35.15%, SD = 18.28. Thus, H2b is not supported.

In H3 we assumed that the amount of hate speech has a negative effect on perceived social cohesion. We find that the amount of hate speech toward Muslims has no effect, F(2, 454) = 1.92, p = .15; Mneutral = 4.73, SD = 1.41, Mfew = 4.47, SD = 1.41, Mmany = 4.44, SD = 1.33. However, the amount of hate speech toward homosexuals affects the perception of social cohesion, F(2, 460) = 3.76, p = .02, η2 = .02 (higher scores on the index indicate a stronger perception of social dissolution). The post hoc test (Bonferroni) reveals that participants who were exposed to many hateful comments (M = 4.73, SD = 1.34) perceived social cohesion more negatively than participants who were exposed to no hate speech (M = 4.37, SD = 1.32). However, this difference reaches only marginal significance (p = .07) and thus does not confirm the hypothesis. The post hoc test also reveals that participants who were exposed to few hateful comments (M = 4.75, SD = 1.46) perceived social cohesion more negatively than participants who were exposed to no hate speech (p = .045). There is no significant difference between the few and many hateful comments groups (p ≈ 1.00). Thus, H3 can only be supported for the homosexual topic, and only when comparing no hate speech to few hateful comments.
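The H3 analysis is an omnibus one-way ANOVA followed by Bonferroni-corrected pairwise comparisons. The sketch below runs both steps on simulated cohesion scores; the group sizes and normal draws are assumptions for illustration (only the means and SDs mirror the homosexual condition reported above).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# simulated 1-7 agreement scores; higher = stronger perceived dissolution
none = rng.normal(4.37, 1.32, 154)
few = rng.normal(4.75, 1.46, 154)
many = rng.normal(4.73, 1.34, 154)

# omnibus test across the three conditions
f_stat, p_omnibus = stats.f_oneway(none, few, many)

# Bonferroni post hoc: multiply each pairwise p-value by the number of tests
pairs = {"none-few": (none, few), "none-many": (none, many), "few-many": (few, many)}
p_adj = {name: min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs))
         for name, (a, b) in pairs.items()}
```

Capping the adjusted p-values at 1.0 reproduces the reported pattern of a "p ≈ 1.00" comparison between the few and many conditions.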

In H4 we assumed that the amount of hate speech in the comment section has a positive effect on polarized attitudes toward the attacked group. Due to the low α values, we cannot test the hypothesis as originally intended. However, we conducted additional exploratory analyses, which are described in the following section.

Exploratory Analyses

To learn more about attitude polarization, we investigated each statement separately for both social groups; these served as the dependent variables in the following analyses. As independent variables, we used the stimulus version, preexisting attitudes toward the social groups, and the interaction term of both. Starting with Muslims, regression analyses show for the statement “Muslims should not be allowed to work at crowded places, such as airports” that only preexisting attitudes predict agreement, while (the amount of) hate speech has no effect (see Table 1). Thus, participants who hold more negative attitudes toward Muslims agree more with the statement. For the statements “I would support political actions to prevent the building of more mosques” and “Muslims should not be allowed to wear headscarves in public institutions,” both the amount of hate speech and the preexisting attitudes affect agreement (see Tables 2 and 3). Further, we find interaction effects. To investigate these effects, we separated the participants into three groups based on their preexisting attitudes toward Muslims (1–5: negative, 6–7: neutral, 8–11: positive) and plotted the interactions. For both statements, the graphs imply that participants who hold negative attitudes toward Muslims agree more with the statement when they encounter (few or many) hateful comments than participants holding positive or neutral attitudes (see Figures 1 and 2). That means hate speech especially affects those with negative attitudes toward Muslims. The confidence intervals indicate the significant differences.
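The moderation analysis described here is an ordinary least squares regression with an interaction term between condition and preexisting attitude. A minimal sketch on simulated data: all coefficients, distributions, and the unbounded outcome are invented for illustration, but the built-in negative interaction mirrors the reported pattern (hate speech raising agreement mainly among respondents with unfavorable attitudes).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
hate = rng.integers(0, 3, n).astype(float)       # 0 = none, 1 = few, 2 = many
attitude = rng.integers(1, 12, n).astype(float)  # preexisting attitude, 1-11

# simulate agreement with a negative interaction: the effect of hate
# speech shrinks as the preexisting attitude becomes more favorable
agree = (2.0 + 0.8 * hate - 0.2 * attitude
         - 0.3 * hate * attitude + rng.normal(0, 1, n))

# OLS fit: intercept, hate, attitude, and the interaction term
X = np.column_stack([np.ones(n), hate, attitude, hate * attitude])
beta, *_ = np.linalg.lstsq(X, agree, rcond=None)
# beta[3] estimates the interaction; a negative sign means hate speech
# matters most for respondents with negative (low) preexisting attitudes
```

Plotting predicted agreement per attitude group against the amount of hate speech, as done for Figures 1 and 2, is then a matter of evaluating the fitted line at representative attitude values.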

Figure 1 Interaction between preexisting attitudes and amount of hate speech (HS) on the agreement to the statement “I would support political actions to prevent the building of more mosques.” n(negative) = 77, n(neutral) = 256, n(positive) = 124.
Figure 2 Interaction between preexisting attitudes and amount of hate speech (HS) on the agreement to the statement “Muslims should not be allowed to wear headscarves in public institutions.” n(negative) = 77, n(neutral) = 256, n(positive) = 124.
Table 1 Regression analysis for predicting agreement to the statement “Muslims should not be allowed to work at crowded places, such as airports”
Table 2 Regression analysis for predicting agreement to the statement “I would support political actions to prevent the building of more mosques”
Table 3 Regression analysis for predicting agreement to the statement “Muslims should not be allowed to wear headscarves in public institutions”

For homosexuals, regression analyses show for the statements “Homosexual couples should not be allowed to adopt children” and “Homosexuals should not be allowed to get married” that only preexisting attitudes predict agreement (see Tables 4 and 5). Thus, participants who hold more negative attitudes toward homosexuals agree more with the demands for restrictions. For the statement “Homosexuals should not work with children and adolescents,” the amount of hate speech affects agreement (see Table 6). Further, we find an interaction effect. To investigate this effect, we built three groups based on participants’ preexisting attitudes (1–6: negative, 7–9: neutral, 10–11: positive). This time, we chose different attitude values to build the groups because the sample contained only a few people with a negative attitude toward homosexuals. The graphs imply that participants who hold negative or neutral views toward homosexuals agree more with the statement when encountering many hateful comments than those with positive preexisting attitudes (see Figure 3).

Figure 3 Interaction between preexisting attitudes and amount of hate speech (HS) on the agreement to the statement “Homosexuals should not work with children and adolescents.” n(negative) = 113, n(neutral) = 123, n(positive) = 22.
Table 4 Regression analysis for predicting agreement to the statement “Homosexual couples should not be allowed to adopt children”
Table 5 Regression analysis for predicting agreement to the statement “Homosexuals should not be allowed to get married”
Table 6 Regression analysis for predicting agreement to the statement “Homosexuals should not work with children and adolescents”

Discussion

The comment section is not just a place where different points of view are exchanged in a respectful manner. Users also spread discriminatory content against social groups because of characteristics such as religion, sexual orientation, gender, or disability, which is referred to as hate speech (Erjavec & Kovačič, 2012). Previous research found that user comments serve as exemplars (Peter et al., 2014) that shape the perception of public opinion (Neubaum & Krämer, 2016) and affect the attitudes of their readers (Hsueh et al., 2015). However, all these effects have been found for controversial issues. Thus, it remained unclear whether attacks on social groups affect perceived public opinion and the formation of attitudes. Since investigating the consequences of hate speech is important for understanding its potential role in destructive social dynamics such as the formation of prejudice or polarized attitudes, the present study aimed to fill this gap in research.

For this purpose, we conducted an experimental study varying both the amount of hate speech and the group that was attacked in the comments. Based on previous findings, we assumed effects for perceived public opinion, perceived social cohesion as well as polarized attitudes toward the group that was attacked.

When it comes to effects on perceived public opinion, our study cannot confirm that discriminatory comments serve as exemplars. Even though previous studies with very similar experimental settings report that a one-time confrontation with user comments affects how participants rate the share of people opposing or supporting assisted suicide, adoption for same-sex couples (Neubaum & Krämer, 2016), or the eviction of violent immigrants (Zerback & Fawzi, 2016), this does not hold true for the perception of social groups that are attacked in comments. All groups overestimated the share of people holding negative attitudes toward Muslims and homosexuals. A possible explanation might be that participants were already aware of the prevalence of hate speech. The majority of Internet users (75%) report having already noticed hate in the comment section (Landesanstalt für Medien NRW, 2019). Moreover, political and legal interventions against online hate, such as the Network Enforcement Act or the lawsuits of prominent figures such as the politician Renate Künast in Germany, have been intensively discussed in the news. As a result, participants might have based their judgments on previous encounters with hate speech, which would explain why the discussion presented in the experiment had no effect. Exemplification effects resulting from a one-time confrontation with comments may only occur if people have little previous experience with a certain slant in the comment section. This could explain why we did not detect any effects on perceived public opinion while other studies do report them.

For perceived social cohesion, attacks on Muslims did not affect how participants judged the state of society. This might also be a result of previous contact with hate speech attacking Muslims. Results by Geschke et al. (2019) indicate that users have noticed hate against Muslims more frequently than hate against homosexuals in online environments. Following the idea of desensitization (Soral et al., 2018), repeated exposure to hate speech decreases sensitivity to it. In other words, people get used to this kind of attack, which can also cause a decline of attention or sympathy for the victim (Linz et al., 1989). Perhaps processes of desensitization also explain the lack of an effect on perceived social cohesion: Since people have been frequently exposed to such content, they are no longer shocked by the comments and do not draw on their negative feelings when judging the state of society. However, hate speech against homosexuals did affect the perception of social cohesion. If the comment section contained some discriminatory and hateful comments against this group, participants perceived social cohesion to be lower. As assumed, hate in the comment section attacking others makes it visible that parts of society are excluded from the collective unit, which probably strengthens the perception that social cohesion is low. However, for many hateful comments attacking homosexuals, we could not confirm this effect. A possible explanation could be that if hate dominates the comment section to such a high degree, it is no longer assumed to be representative of society. This perception, or the assumption that such a high amount of hate is a rather unrealistic scenario, could have attenuated effects on perceived social cohesion in this condition.
In general, it should be considered that the effect sizes of the significant findings were very low, indicating that, even for homosexuals, hate speech is not an important determinant of perceived social cohesion.

When it comes to polarizing attitudes, we found for both groups that the effects of hate speech depend on the specific demand under investigation. Attacks on Muslims made people with negative and neutral attitudes toward this group oppose the building of mosques more strongly and agree more with the statement that Muslims should not be allowed to wear headscarves in public institutions. When homosexuals were attacked, people with rather neutral and negative attitudes tended to agree more that homosexuals should not work with children and adolescents. Polarizing effects occurred to a higher degree for Muslims than for homosexuals, since the mean differences between the groups were larger and effects were significant for two items rather than just one. For the significant statements, the effects can be explained with media priming and desensitization: The discriminatory attacks on these groups might have primed stereotypes and lowered the sensitivity to prejudice against both Muslims and homosexuals, making people more willing to agree with stereotyped statements themselves. The fact that these effects were only found for people with neutral or negative preexisting attitudes indicates that hate speech can contribute to polarizing tendencies in a society. Also, since prejudice among those with neutral or negative preexisting attitudes increased compared to the control group, social cohesion seems to be negatively affected. These participants tended to show more distrust toward homosexuals and wanted them excluded from specific fields of society, which indicates a state of social dissolution (Lockwood, 1999). Thus, based on these findings, it can be concluded that hate speech can contribute to polarization and also negatively affect social cohesion in a society. This could be shown for two different social groups that were attacked in the comment section.
However, whether attitudes become more extreme after confrontation with hate speech depends on the specific demands. It is up to future research to investigate which characteristics make demands more susceptible to the influence of hate speech.

Naturally, our study does not come without limitations. First, the experiment investigated the effects of hate speech in an artificial situation. Participants were confronted with hateful comments in a newsfeed that was not their own. Moreover, we tried to make the comments as realistic as possible, but we also had to make sure that the manipulation was not confounded. As a result, it was not possible to include elements such as reaction emoticons (e.g., like, love, anger) or replies to comments, even though both are common features in comment sections. This limits the external validity of the results.

Moreover, we were unable to create an index for attitudes toward Muslims/homosexuals since the reliability scores for both groups were too low. This indicates that the statements we used captured different facets of stereotyped attitudes, which seem to be affected differently by hate speech. Perhaps the concept of polarized attitudes was too broad in general. Future studies could focus on specific types of resentment that can then be measured with several items. This would make the impact of hate speech clearer and also enable a more reliable measurement of the dependent variable.

Another critical point concerns the sample of the study. We relied on the SoSci Panel, which provides participants with a distribution of age and gender similar to that of the German-speaking population. However, the sample contains more people with a high level of education and a college degree than the general population. This educational bias might explain why there were only a few participants with negative attitudes toward Muslims and almost none with negative attitudes toward homosexuals. Since we find that especially people with negative attitudes toward the social groups are influenced by hateful comments, our study likely underestimates the effects of hate speech due to this bias. Thus, it is important to replicate our study with a sample that shows a more realistic distribution of attitudes toward Muslims as well as homosexuals. It is plausible to assume that the polarizing effects of hate speech might be even stronger in a more representative sample and in society at large.

In sum, our study is among the first to investigate how hate speech affects the perception of social dynamics as well as attitudes toward social groups. It can be concluded that hate speech can have destructive societal consequences and that the fight against online hate needs to be taken seriously. This concerns first and foremost news organizations, which are mainly responsible for news posts and articles reaching a large audience. The findings stress the importance of effective moderation of user comments that detects and responds to hate speech. This would reduce the negative effects of online discussions while still allowing users to engage in civil discourse.

Further, the results of the study also emphasize the importance of political and legal interventions against hate speech. Laws such as the Network Enforcement Act and the establishment of special investigation departments have marked important steps in this regard. However, it would also be important to develop and legally codify definitions of hate speech that enable the prosecution of those spreading severe forms of hate and discrimination online. A solution for less severe forms of hate speech could be to encourage other users to engage in counter speech. Answering hateful comments in a civil manner could counteract the negative effects of hate speech without the need for legal intervention.

Overall, the study underlines the importance for researchers, politicians, journalists, and Internet users of increasing their efforts to reduce hate speech in user comments to a minimum. This is necessary not just to protect those who are attacked in comments but also to strengthen social cohesion in society.

Svenja Schäfer is a postdoctoral researcher at the Political Communication Research Group, Department of Communication, University of Vienna, Austria. Her research interest includes the use and effects of news in digital environments, especially social network sites and user comments.

Michael Sülflow is a postdoctoral researcher at the Department of Communication Research, Johannes Gutenberg University Mainz, Germany. His research interest includes political communication as well as the contents and effects of visual communication.

Liane Reiners is a research assistant at the Department of Communication Research, Johannes Gutenberg University Mainz, Germany. Her research interest includes forms and effects of hate speech, stereotypes, and prejudices toward different social groups.

References

  • Anderson, A. A., Yeo, S. K., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2018). Toxic talk: How online incivility can undermine perceptions of media. International Journal of Public Opinion Research, 30(1), 156–168. https://doi.org/10.1093/ijpor/edw022

  • Arendt, F. (2013). Dose-dependent media priming effects of stereotypic newspaper articles on implicit and explicit stereotypes. Journal of Communication, 63(5), 830–851. https://doi.org/10.1111/jcom.12056

  • Bilewicz, M., & Soral, W. (2020). Hate speech epidemic: The dynamic effects of derogatory language on intergroup relations and political radicalization. Political Psychology, 41(1), 3–33. https://doi.org/10.1111/pops.12670

  • Brosius, H.-B. (1999). Research note: The influence of exemplars on recipients’ judgements: The part played by similarity between exemplar and recipient. European Journal of Communication, 14(2), 213–224. https://doi.org/10.1177/0267323199014002004

  • Cammaerts, B. (2018). The mainstreaming of extreme right-wing populism in the low countries: What is to be done? Communication, Culture and Critique, 11(1), 7–20. https://doi.org/10.1093/ccc/tcx002

  • Coe, K., Kenski, K., & Rains, S. A. (2014). Online and uncivil? Patterns and determinants of incivility in newspaper website comments. Journal of Communication, 64(4), 658–679. https://doi.org/10.1111/jcom.12104

  • Erjavec, K., & Kovačič, M. P. (2012). “You don’t understand, this is a new war!”: Analysis of hate speech in news web sites’ comments. Mass Communication and Society, 15(6), 899–920. https://doi.org/10.1080/15205436.2011.619679

  • Faris, R., Ashar, A., Gasser, U., & Joo, D. (2016). Understanding harmful speech online. Berkman Klein Center for Internet & Society Research Publication. http://nrs.harvard.edu/urn-3:HUL.InstRepos:38022941

  • Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146

  • Friedkin, N. E. (2004). Social cohesion. Annual Review of Sociology, 30(1), 409–425. https://doi.org/10.1146/annurev.soc.30.012703.110625

  • Friemel, T. N., & Dötsch, M. (2015). Online reader comments as indicator for perceived public opinion. In M. Emmer & C. Strippel (Eds.), Kommunikationspolitik für die digitale Gesellschaft (pp. 151–172). DGPuK.

  • Geschke, D., Klaßen, A., Quent, M., & Richter, C. (2019). #Hass im Netz: Der schleichende Angriff auf unsere Demokratie: Eine bundesweite repräsentative Untersuchung [#Hate online: The creeping attack on our democracy: A nationwide representative study]. https://www.idz-jena.de/fileadmin/user_upload/_Hass_im_Netz_-_Der_schleichende_Angriff.pdf

  • Guo, L., & Johnson, B. G. (2020). Third-person effect and hate speech censorship on Facebook. Social Media + Society, 6(2), 2056305120923003. https://doi.org/10.1177/2056305120923003

  • Hsueh, M., Yogeeswaran, K., & Malinen, S. (2015). “Leave Your Comment Below”: Can biased online comments influence our own prejudicial attitudes and behaviors? Human Communication Research, 41(4), 557–576. https://doi.org/10.1111/hcre.12059

  • Kallis, A. (2013). Breaking taboos and “mainstreaming the extreme”: The debates on restricting Islamic symbols in contemporary Europe. In R. Wodak, M. KhosraviNik, & B. Mral (Eds.), Right-wing populism in Europe (pp. 55–70). Bloomsbury Academic.

  • Kulkarni, V., ElSherief, M., Nguyen, D., Wang, W. Y., & Belding, E. (2018, June 1). Hate lingo: A target-based linguistic analysis of hate speech in social media. https://arxiv.org/abs/1804.04257

  • Küpper, B., Klocke, U., & Hoffmann, L.-C. (2017). Einstellungen gegenüber lesbischen, schwulen und bisexuellen Menschen in Deutschland: Ergebnisse einer bevölkerungsrepräsentativen Umfrage [Attitudes towards lesbian, gay and bisexual people in Germany: Results of a representative survey]. Nomos.

  • Landesanstalt für Medien NRW. (2019). Ergebnisbericht: Forsa-Befragung zu Hate Speech [Results report: Forsa survey on hate speech]. https://www.medienanstalt-nrw.de/fileadmin/user_upload/lfm-nrw/Service/Pressemitteilungen/Dokumente/2019/forsa_LFMNRW_Hassrede2019_Ergebnisbericht.pdf

  • Lee, E.-J., & Jang, Y. J. (2010). What do others’ reactions to news on internet portal sites tell us? Effects of presentation format and readers’ need for cognition on reality perception. Communication Research, 37(6), 825–846. https://doi.org/10.1177/0093650210376189

  • Lee, S. A., Reid, C. A., Short, S. D., Gibbons, J. A., Yeh, R., & Campbell, M. L. (2013). Fear of Muslims: Psychometric evaluation of the Islamophobia Scale. Psychology of Religion and Spirituality, 5(3), 157–171. https://doi.org/10.1037/a0032117

  • Leiner, D. J. (2016). Our research’s breadth lives on convenience samples: A case study of the online respondent pool “SoSci Panel”. Studies in Communication | Media, 5(4), 367–396. https://doi.org/10.5771/2192-4007-2016-4-367

  • Linz, D., Donnerstein, E., & Adams, S. M. (1989). Physiological desensitization and judgments about female victims of violence. Human Communication Research, 15(4), 509–522. https://doi.org/10.1111/j.1468-2958.1989.tb00197.x

  • Lockwood, D. (1999). Civic integration and social cohesion. In I. Gough & G. Olofsson (Eds.), Capitalism and social cohesion (pp. 63–84). Palgrave Macmillan UK. https://doi.org/10.1057/9780230379138_4

  • Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098

  • Muddiman, A. (2017). Personal and public level of incivility. International Journal of Communication, 11, 3182–3202.

  • Neubaum, G., & Krämer, N. C. (2016). Monitoring the opinion of the crowd: Psychological mechanisms underlying public opinion perceptions on social media. Media Psychology, 20(3), 502–531. https://doi.org/10.1080/15213269.2016.1211539

  • Novy, A., Swiatek, D. C., & Moulaert, F. (2012). Social cohesion: A conceptual and political elucidation. Urban Studies, 49(9), 1873–1889. https://doi.org/10.1177/0042098012444878

  • Obermaier, M., Hofbauer, M., & Reinemann, C. (2018). Journalists as targets of hate speech. How German journalists perceive the consequences for themselves and how they cope with it. Studies in Communication and Media, 7(4), 499–524. https://doi.org/10.5771/2192-4007-2018-4-499

  • Peter, C., Rossmann, C., & Keyling, T. (2014). Exemplification 2.0: Roles of direct and indirect social information in conveying health messages through social network sites. Journal of Media Psychology, 26(1), 19–28. https://doi.org/10.1027/1864-1105/a000103

  • Ramasubramanian, S. (2007). Media-based strategies to reduce racial stereotypes activated by news stories. Journalism & Mass Communication Quarterly, 84(2), 249–264. https://doi.org/10.1177/107769900708400204

  • Seise, J., Banse, R., & Neyer, F. J. (2002). Individuelle Unterschiede in impliziten und expliziten Einstellungen zur Homosexualität [Individual differences in terms of implicit and explicit attitudes toward homosexuality: An empirical study]. Zeitschrift für Sexualforschung, 15(1), 21–42. https://doi.org/10.1055/s-2002-25178

  • Sellars, A. (2016). Defining hate speech (Berkman Klein Center Research Publication No. 2016-20; Boston University School of Law, Public Law Research Paper No. 16-48). http://dx.doi.org/10.2139/ssrn.2882244

  • Silva, L., Mondal, M., Correa, D., Benevenuto, F., & Weber, I. (2016, May 1). Analyzing the targets of hate in online social media. https://arxiv.org/abs/1603.07709

  • Soral, W., Bilewicz, M., & Winiewski, M. (2018). Exposure to hate speech increases prejudice through desensitization. Aggressive Behavior, 44(2), 136–146. https://doi.org/10.1002/ab.21737

  • Springer, N., Engelmann, I., & Pfaffinger, C. (2015). User comments: Motives and inhibitors to write and read. Information, Communication & Society, 18(7), 798–815. https://doi.org/10.1080/1369118X.2014.997268

  • Sung, K. H., & Lee, M. J. (2015). Do online comments influence the public’s attitudes toward an organization? Effects of online comments based on individuals’ prior attitudes. The Journal of Psychology, 149(3–4), 325–338. https://doi.org/10.1080/00223980.2013.879847

  • Wilhelm, C., Joeckel, S., & Ziegler, I. (2020). Reporting hate comments: Investigating the effects of deviance characteristics, neutralization strategies, and users’ moral orientation. Communication Research, 47(6), 921–944. https://doi.org/10.1177/0093650219855330

  • Winiewski, M., Hansen, K., Bilewicz, M., Soral, W., Swiderska, A., & Bulska, D. (2016). Contempt speech, hate speech – Report from research on verbal violence against minority groups. http://www.ngofund.org.pl/wp-content/uploads/2017/02/Contempt_Speech_Hate_Speech_Full_Report.pdf

  • Yamamoto, M. (2011). Community newspaper use promotes social cohesion. Newspaper Research Journal, 32(1), 19–33. https://doi.org/10.1177/073953291103200103

  • Zerback, T., & Fawzi, N. (2016). Can online exemplars trigger a spiral of silence? Examining the effects of exemplar opinions on perceptions of public opinion and speaking out. New Media & Society, 19(7), 1034–1051. https://doi.org/10.1177/1461444815625942

  • Zerback, T., & Peter, C. (2018). Exemplar effects on public opinion perception and attitudes: The moderating role of exemplar involvement. Human Communication Research, 44(2), 176–196. https://doi.org/10.1093/hcr/hqx007

  • Zick, A., & Küpper, B. (2012). Zusammenhalt durch Ausgrenzung? Wie die Klage über den Zerfall der Gesellschaft und die Vorstellung von kultureller Homogenität mit gruppenbezogener Menschenfeindlichkeit zusammenhängen [Cohesion through exclusion? How the complaint about collapse of society and the idea of cultural homogeneity is related to group-related misanthropy]. In W. Heitmeyer (Ed.), Deutsche Zustände, Folge 10 (pp. 152–176). Suhrkamp.

  • Ziegele, M., Koehler, C., & Weber, M. (2018). Socially destructive? Effects of negative and hateful user comments on readers’ donation behavior toward refugees and homeless persons. Journal of Broadcasting & Electronic Media, 62(4), 636–653. https://doi.org/10.1080/08838151.2018.1532430

  • Zillmann, D., & Brosius, H.-B. (2000). Exemplification in communication: The influence of case reports on the perception of issues. Erlbaum.