Editorial

Factors Guiding Moral Judgment, Reason, Decision, and Action

Published Online:https://doi.org/10.1027/1618-3169/a000360

The field of moral psychology has become increasingly popular in recent years (cf. Bonnefon & Trémolière, 2017; Cohen Priva & Austerweil, 2015; Greene, 2015; Schleim, 2015). While it was a rather exotic part of psychology during the 1980s and 1990s, interest in moral cognition has since skyrocketed, starting with the seminal and highly cited papers by Greene, Sommerville, Nystrom, Darley, and Cohen (2001) and Haidt (2001). A characteristic feature of this research is its interdisciplinarity (cf. Waldmann, Nagel, & Wiegmann, 2012). This work has stimulated interest among cognitive, social, and developmental psychologists, as well as neuroscientists, experimental philosophers, evolutionary biologists, and anthropologists, all of whom have sought to contribute to our understanding of moral behavior and moral judgments.

Since these seminal papers, researchers in the field of moral psychology have presented a multitude of studies aiming to show how people make moral judgments, and how the process of moral judgment is intertwined with other cognitive processes. For instance, researchers have provided striking evidence that moral cognition is associated with causal cognition (Hitchcock & Knobe, 2009; Knobe, 2003), counterfactual reasoning (Kominsky, Phillips, Gerstenberg, Lagnado, & Knobe, 2015), judgments about whether the agent acted freely (Driver, 2008), and intentionality (Astuti & Bloch, 2015; Hindriks, 2011; Knobe, 2003; Machery, 2008).

Despite wide-ranging work on moral psychology and its connections to other cognitive processes, little is known about whether moral cognition involves cognitive processes that are unique to solving moral issues. Put succinctly: Are there distinct psychological processes at work when it comes to moral behavior and judgments, or is moral cognition no different from forms of cognition found in other social contexts? Are there special brain areas devoted to moral questions, or is moral cognition underpinned by neurological processes that are also recruited when making judgments, decisions, or inferences in social or even economic contexts? Might there even be an innate and universal moral module (Mikhail, 2011), or is moral cognition domain general? Although this special issue on moral agency is not explicitly directed toward answering all of these questions, its contributions help to shed considerable light on them.

For instance, in their theoretical article “Explaining moral behavior: A minimal moral model,” Osman and Wiegmann (2017) take a clear stance on the potential uniqueness of what moral psychology investigates. They argue that moral situations per se do not require a specialized toolbox designed for moral problems. Instead, the judgment, reasoning, and decision-making processes used to resolve moral dilemmas are no different from the cognitive processes recruited in what some might construe as nonmoral contexts (e.g., economic contexts, social contexts, causal contexts). Consequently, researchers who aim to improve and develop existing theories of moral cognition by providing computational models and general frameworks for understanding moral psychology should build on domain-general principles from current reasoning, judgment, and decision-making research. To support their view, Osman and Wiegmann show that a simple, domain-general, value-based decision model can describe and predict a range of core moral behaviors.

In a similar vein, Powell and Horne’s (2017) empirical paper “Moral severity is represented as a domain-general magnitude” advances a related thesis. They investigate how the severity of moral transgressions is psychologically represented by measuring participants’ response times. These response times exhibited two signatures of domain-general magnitude comparisons, suggesting that moral severity is represented in a similar fashion to other continuous magnitudes and is therefore not represented in a unique, domain-specific way.

In their empirical article “Scale effects in moral relevance judgment: How implicit presuppositions affect expressed judgments,” Nagel and Rybak (2017) test how different response scales can affect moral relevance judgments. They found that these judgments can be qualitatively affected by varying the number of response options provided (odd vs. even). Based on these observations, they conclude that expressed moral judgments are constructed ad hoc and do not necessarily reflect the content of underlying stable moral commitments, thereby resembling expressed preferences in other fields of judgment and decision making.

In their empirical article “Moral Hindsight,” Fleischhut, Meder, and Gigerenzer (2017) investigate whether the well-known hindsight effect can also be found in the moral domain. They show that participants for whom the occurrence of negative side effects was uncertain judged actions to be morally more permissible than participants who knew that negative side effects had occurred. Their findings thus extend hindsight effects in the retrospective evaluation of judgments and decisions to the moral domain.

Liao’s (2017) theoretical article “Neuroscience and ethics: Assessing Greene’s epistemic debunking argument against deontology” considers the implications of empirical findings for normative questions that have been posed in the moral domain. By linking deontological and consequentialist moral judgments to domain-general emotional and deliberative processes, respectively, Greene argued that consequentialist moral theories are normatively superior. Liao argues that several of Greene’s arguments fall short of undermining deontological judgments, and that the neuroimaging results may in fact call into question the reliability of consequentialist judgments, that is, judgments made on the basis of the consequences of acts rather than the intentions of the moral agent.

In “The intention-outcome asymmetry effect: How incongruent intentions and outcomes influence judgments of responsibility and causality,” Sarin, Lagnado, and Burgess (2017) identify a novel asymmetry in people’s judgments of causality, responsibility, and blame. When intentions are incongruent with outcomes, people assign greater responsibility, greater causality, and greater blame to an agent with good intentions who produces a bad outcome than to an agent with bad intentions who produces a good outcome. This asymmetry is explained by additional inferences that people make beyond the information given in the scenarios in order to make sense of the overall story.

We hope that the range of articles in this special issue provides readers with an up-to-the-minute snapshot of the current empirical and theoretical insights that are likely to shape the direction of moral research to come. Moreover, this special issue is unique in that it brings together researchers from philosophy and psychology who are guided by a key underpinning idea: that the core processes underlying moral cognition share critical similarities with patterns of cognition found in a variety of nonmoral contexts.

Alex Wiegmann, Cognitive and Decision Sciences, Georg-Elias-Müller Institute of Psychology, University of Göttingen, Goßlerstr. 14, 37073 Göttingen, Germany,
Magda Osman, Queen Mary University of London, Biology and Experimental Psychology Centre, Mile End Rd, London E1 4NS, UK,