How people explain action (and autonomous intelligent systems should too)

Maartje M.A. De Graaf, Bertram F. Malle

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

To make Autonomous Intelligent Systems (AIS), such as virtual agents and embodied robots, "explainable," we need to understand how people respond to such systems and what expectations they have of them. Our thesis is that people will regard most AIS as intentional agents and apply the conceptual framework and psychological mechanisms of human behavior explanation to them. We present a well-supported theory of how people explain human behavior and sketch what it would take to implement the underlying framework of explanation in AIS. The benefits will be considerable: when an AIS is able to explain its behavior in ways that people find comprehensible, people are more likely to form correct mental models of the system and to calibrate their trust in it.

Original language: English
Title of host publication: FS-17-01
Subtitle of host publication: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind
Publisher: AI Access Foundation
Pages: 19-26
Number of pages: 8
Volume: FS-17-01 - FS-17-05
ISBN (Electronic): 9781577357940
Publication status: Published - 1 Jan 2017
Event: 2017 AAAI Fall Symposium - Arlington, United States
Duration: 9 Nov 2017 - 11 Nov 2017

Conference

Conference: 2017 AAAI Fall Symposium
Country/Territory: United States
City: Arlington
Period: 9/11/17 - 11/11/17
