Individual Differences in Risk Perception of Artificial Intelligence
Abstract
This cross-sectional study (N = 325) investigated the relationship between the Dark Triad personality traits and the perception of artificial intelligence (AI) risk. Narrow AI risk perception was measured with items based on recently identified risk perceptions in the public. Artificial general intelligence (AGI) risk perception was operationalized as plausibility ratings and subjective probability estimates for the deceptive AI scenarios developed by Bostrom (2014), in which AI-sided deception is described as a function of intelligence. Machiavellianism and psychopathy predicted narrow AI risk perception over and above the shared variance of the Dark Triad and over and above the Big Five. Among individuals with self-reported knowledge of machine learning, the Dark Triad traits were associated with AGI risk perception. This study provides evidence for substantial individual differences in the risk perception of AI.
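The incremental-validity claim above (the dark traits predicting narrow AI risk perception over and above the Big Five) is typically tested with hierarchical regression: the Big Five enter in step 1, the Dark Triad traits in step 2, and the change in R² between steps is examined. A minimal sketch of that logic, using simulated data with illustrative effect sizes (not the study's data or results):

```python
import random

def ols_r2(X, y):
    """Fit y = Xb by ordinary least squares and return R^2.
    X is a list of rows; each row starts with 1.0 for the intercept."""
    n, k = len(X), len(X[0])
    # Normal equations: (X'X) b = X'y
    xtx = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    xty = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Solve with Gaussian elimination and partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, k))) / xtx[r][r]
    yhat = [sum(X[i][p] * b[p] for p in range(k)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((y[i] - yhat[i]) ** 2 for i in range(n))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(1)
n = 325  # sample size matches the study; the data themselves are simulated
big5 = [[random.gauss(0, 1) for _ in range(5)] for _ in range(n)]
mach = [random.gauss(0, 1) for _ in range(n)]   # Machiavellianism (illustrative)
psyc = [random.gauss(0, 1) for _ in range(n)]   # psychopathy (illustrative)
# Simulated outcome: narrow AI risk perception with made-up effects
y = [0.3 * big5[i][0] + 0.4 * mach[i] + 0.3 * psyc[i] + random.gauss(0, 1)
     for i in range(n)]

step1 = [[1.0] + big5[i] for i in range(n)]                       # Big Five only
step2 = [[1.0] + big5[i] + [mach[i], psyc[i]] for i in range(n)]  # + dark traits
r2_1, r2_2 = ols_r2(step1, y), ols_r2(step2, y)
print(f"R2 step 1 = {r2_1:.3f}, R2 step 2 = {r2_2:.3f}, delta R2 = {r2_2 - r2_1:.3f}")
```

A positive delta R² in step 2 is the pattern the abstract reports; in practice it is accompanied by a significance test of the R² change, which this sketch omits.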
References
(2010). Investigating theory of mind deficits in nonclinical psychopathy and Machiavellianism. Personality and Individual Differences, 49, 169–174. doi 10.1016/j.paid.2010.03.027
(2016). Concrete problems in AI safety. arXiv preprint, arXiv:1606.06565v2.
(2016). Racing to the precipice: A model of artificial intelligence development. AI & Society, 31, 201–206. doi 10.1007/s00146-015-0590-y
(2015). 12 risks that threaten human civilization. Global Challenges Foundation. Retrieved from http://www.oxfordmartin.ox.ac.uk/publications/view/1881
(2012). Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 22, 299–324. doi 10.1007/s11023-012-9282-2
(2017). Guidelines for artificial intelligence containment. Retrieved from https://arxiv.org/abs/1707.08476
(1998). Los Cinco Grandes across cultures and ethnic groups: Multitrait-multimethod analyses of the Big Five in Spanish and English. Journal of Personality and Social Psychology, 75, 729–750. doi 10.1037//0022-3514.75.3.729
(2008). Individual differences in judging deception: Accuracy and bias. Psychological Bulletin, 134, 477–492. doi 10.1037/0033-2909.134.4.477
(1988). The evolution of deception. Journal of Nonverbal Behavior, 12, 295–307. doi 10.1007/BF00987597
(2014). Superintelligence: Paths, dangers, strategies. Oxford, UK: Oxford University Press.
(2015). The manipulative skill: Cognitive devices and their neural correlates underlying Machiavellian’s decision making. Brain and Cognition, 99, 24–31. doi 10.1016/j.bandc.2015.06.007
(2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint, arXiv:1802.07228v1.
(1996). Machiavellian intelligence. Evolutionary Anthropology, 5, 172–180. doi 10.1002/(SICI)1520-6505(1996)5:5<172::AID-EVAN6>3.0.CO;2-H
(2004). Neocortex size predicts deception rate in primates. Proceedings of the Royal Society B – Biological Sciences, 271, 1693–1699. doi 10.1098/rspb.2004.2780
(1996). Future time perspective scale. Unpublished manuscript, Stanford University.
(1970). Studies in Machiavellianism. New York: Academic Press.
(2010). Anthropic shadow: Observation selection effects and human extinction risks. Risk Analysis, 30, 1495–1506. doi 10.1111/j.1539-6924.2010.01460.x
(1992). Revised NEO personality inventory (NEO PI-R). Odessa, FL: Psychological Assessment Resources.
(1979). Arms races between and within species. Proceedings of the Royal Society B – Biological Sciences, 205, 489–511. doi 10.1098/rspb.1979.0081
(2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114, 864–886. doi 10.1037/0033-295X.114.4.864
(2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149–1160. doi 10.3758/BRM.41.4.1149
(2002). Theoretical and empirical comparison between two models for continuous item responses. Multivariate Behavioral Research, 37, 521–542. doi 10.1207/S15327906MBR3704_05
(2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. doi 10.1016/j.techfore.2016.08.019
(2013). The relation between antisocial personality and the perceived ability to deceive. Personality and Individual Differences, 54, 246–250. doi 10.1016/j.paid.2012.09.004
(2017). When will AI exceed human performance? Evidence from AI experts. arXiv preprint, arXiv:1705.08807v2.
(2017). Neuroscience-inspired artificial intelligence. Neuron, 95, 245–258. doi 10.1016/j.neuron.2017.06.011
(1999). The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In L. A. Pervin & O. P. John (Eds.), Handbook of personality: Theory and research (2nd ed., pp. 102–138). New York: Guilford.
(2014). What a tangled web we weave: The Dark Triad and deception. Personality and Individual Differences, 70, 117–119. doi 10.1016/j.paid.2014.06.038
(2013). The core of darkness: Uncovering the heart of the Dark Triad. European Journal of Personality, 27, 521–531. doi 10.1002/per.1893
(2009). Machiavellianism. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of individual differences in social behavior (pp. 93–108). New York: Guilford.
(2014). Introducing the Short Dark Triad (SD3): A brief measure of dark personality traits. Assessment, 21, 28–41. doi 10.1177/1073191113514105
(2016). The role of Machiavellian views and tactics in psychopathology. Personality and Individual Differences, 94, 72–81. doi 10.1016/j.paid.2016.01.002
(2007). Timescale bias in the attribution of mind. Journal of Personality and Social Psychology, 93, 1–11. doi 10.1037/0022-3514.93.1.1
(2016). Future progress in artificial intelligence: A survey of expert opinion. In V. C. Müller (Ed.), Fundamental issues of artificial intelligence (pp. 555–572). Berlin, Germany: Springer. doi 10.1007/978-3-319-26485-1_33
(2008). Lie acceptability: A construct and measure. Communication Research Reports, 25, 282–288. doi 10.1080/08824090802440170
(2008). The basic AI drives. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the First AGI Conference. Frontiers in Artificial Intelligence and Applications, Vol. 171. Clifton, VA: IOS Press.
(2000). Likelihood-based item fit indices for dichotomous item response theory models. Applied Psychological Measurement, 24, 50–64. doi 10.1177/01466216000241003
(2002). The Dark Triad of personality: Narcissism, Machiavellianism, and psychopathy. Journal of Research in Personality, 36, 556–563. doi 10.1016/S0092-6566(02)00505-6
(2016). The evolutionary genetics of personality revisited. Current Opinion in Psychology, 7, 104–109. doi 10.1016/j.copsyc.2015.08.021
(1986). Development, genetics, and psychology. Hillsdale, NJ: Erlbaum.
(2017). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/
(1973). Homogeneous case of the continuous response model. Psychometrika, 38, 203–219. doi 10.1007/BF02291114
(2005). A noniterative item parameter solution in each EM cycle of the continuous response model. Educational Technology Research, 28, 11–22. doi 10.15077/etr.KJ00003899231
(2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529, 484–489. doi 10.1038/nature16961
(2017). Mastering the game of Go without human knowledge. Nature, 550, 354–359. doi 10.1038/nature24270
(2017). Public views of machine learning: Findings from public research and engagement conducted on behalf of the Royal Society. Retrieved from https://royalsociety.org/~/media/policy/projects/machine-learning/publications/public-views-of-machine-learning-ipsos-mori.pdf
(2017). Validity and Mechanical Turk: An assessment of exclusion methods and interactive experiments. Computers in Human Behavior, 77, 184–197. doi 10.1016/j.chb.2017.08.038
(2006). Toward a clarification of probability, possibility and plausibility: How semantics could help futures practice to improve. Foresight, 8, 17–27. doi 10.1108/14636680610668045
(2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34, 1–16. doi 10.1017/S0140525X10001354
(2015). Mindreading in the dark: Dark personality features and theory of mind. Personality and Individual Differences, 87, 50–54. doi 10.1016/j.paid.2015.07.025
(2013). Causal entropic forces. Physical Review Letters, 110, 168702. doi 10.1103/PhysRevLett.110.168702
(2011). Two mechanisms for simulating other minds: Dissociations between mirroring and self-projection. Current Directions in Psychological Science, 20, 197–200. doi 10.1177/0963721411409007
(2014). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5, 219–232. doi 10.1177/1745691610369336
(2012). Leakproofing the singularity: Artificial intelligence confinement problem. Journal of Consciousness Studies, 19, 194–214.
(2002). The AI-Box Experiment. Retrieved from http://yudkowsky.net/singularity/aibox