Abstract
To make Autonomous Intelligent Systems (AIS), such as virtual agents and embodied robots, "explainable," we need to understand how people respond to such systems and what expectations they have of them. Our thesis is that people will regard most AIS as intentional agents and apply the conceptual framework and psychological mechanisms of human behavior explanation to them. We present a well-supported theory of how people explain human behavior and sketch what it would take to implement the underlying framework of explanation in AIS. The benefits will be considerable: when an AIS is able to explain its behavior in ways that people find comprehensible, people are more likely to form correct mental models of such a system and to calibrate their trust in it.
| Original language | English |
| --- | --- |
| Title of host publication | FS-17-01 |
| Subtitle of host publication | Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind |
| Publisher | AI Access Foundation |
| Pages | 19-26 |
| Number of pages | 8 |
| Volume | FS-17-01 - FS-17-05 |
| ISBN (Electronic) | 9781577357940 |
| Publication status | Published - 1 Jan 2017 |
| Event | 2017 AAAI Fall Symposium, Arlington, United States, 9 Nov 2017 → 11 Nov 2017 |
Conference
| Conference | 2017 AAAI Fall Symposium |
| --- | --- |
| Country/Territory | United States |
| City | Arlington |
| Period | 9 Nov 2017 → 11 Nov 2017 |