On Human Moral Evaluation of Robot-Robot-Interaction: Is it wrong when a robot hits a robot?

Jan-Philip van Acken, Pim Haselager, Luca Consoli

    Research output: Other contribution (Academic)

    Abstract

    When observing robots interact, the question arises whether such observations are made in terms of moral judgments. Ways to enable robots to behave morally are being discussed. One way to describe moral actions is Moral Foundations Theory, in which morality is broken down along several dimensions. We had 262 students participate in a web-based study, asking them to watch 11 movies of robots interacting and then indicate their level of agreement on statements concerning moral dimensions. We found trends in the data suggesting that participants rated the robot-robot interaction in moral terms and picked up on manipulations along several moral dimensions. This implies that even robot-robot interaction is viewed as if the robots adhered to human systems of morality.
    Original language: English
    Number of pages: 80
    Publication status: Published - 21 Mar 2017
