"Human and Machine Morality"
Dr. Bertram Malle, Brown University
Monday, September 26th at 5:30 p.m., 118 Psychology
About Dr. Malle
Bertram F. Malle is Professor in the Department of Cognitive, Linguistic, and Psychological Sciences
at Brown University and Co-Director of the Humanity-Centered Robotics Initiative at Brown.
He was trained in psychology, philosophy, and linguistics at the University of Graz, Austria,
and received his Ph.D. in psychology from Stanford University in 1995. He received the
Society of Experimental Social Psychology Outstanding Dissertation Award in 1995 and a
National Science Foundation (NSF) CAREER Award in 1997, and he is a past president of the Society for Philosophy and Psychology.
Malle’s research, which has been funded by the NSF, the Army, the Templeton Foundation, the Office of Naval Research, and DARPA,
focuses on social cognition, moral psychology, and human-robot interaction.
Abstract
Machine cognition has long been one of the core topics of cognitive science. Today, we celebrate feats of machine cognition that long eluded us: natural language processing, object perception, and real-world autonomous motion. Now we face a new challenge: machine morality. Artificial agents are entering society in domains such as medicine, education, and the law—all teeming with morally significant decisions. How do people respond to such moral machines, and how could and should we design these machines to make the right moral decisions? Only the integration of psychology, computer science, and ethics can address these challenging questions. I will report here on our research that examines both the human side of morality—the nature of people’s moral judgments and decisions—and the prospects for machine morality—what it would mean, and take, to build a machine with moral competence. Along the way I highlight the defining role of norms in morality, the sophistication of human moral judgments, and the central impact of justifications. Despite many unanswered questions, moral cognitive science has matured to the point that we can plan how to build a moral machine; whether we will, and should, succeed is not yet known.
Suggested Reading
Malle, B. F., & Scheutz, M. (2018). Learning how to behave: Moral competence for social robots. In Handbuch Maschinenethik (pp. 1–24).
Malle, B. F., Magar, S. T., & Scheutz, M. (2019). AI in the sky: How people morally evaluate human and machine decisions in a lethal strike dilemma. In Robotics and Well-Being (pp. 111–133). Springer, Cham.
Malle, B. F. (2021). Moral judgments. Annual Review of Psychology, 72, 293–318.