Abstract
The increasing complexity of robotic systems is pressing the need for them to be transparent and trustworthy. When people interact with a robotic system, they will inevitably construct mental models to understand and predict its actions. However, people's mental models of robotic systems stem from their interactions with living beings, which risks establishing incorrect or inadequate mental models and may lead people to either under- or over-trust these systems. We need to understand the inferences that people make about robots from their behavior, and to leverage this understanding to formulate and implement behaviors in robotic systems that support the formation of correct mental models and foster trust calibration. This way, people will be better able to predict the intentions of these systems and thus more accurately estimate their capabilities, better understand their actions, and potentially correct their errors. The aim of this full-day workshop is to provide a forum for researchers and practitioners to share and learn about recent research on people's inferences about robot actions, as well as the implementation of transparent, predictable, and explainable behaviors in robotic systems.
| Original language | English |
| --- | --- |
| Title of host publication | HRI 2018 - Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction |
| Publisher | IEEE |
| Pages | 387-388 |
| Number of pages | 2 |
| ISBN (Electronic) | 9781450356152 |
| Publication status | Published - 1 Mar 2018 |
| Event | 13th Annual ACM/IEEE International Conference on Human Robot Interaction, HRI 2018, Chicago, United States. Duration: 5 Mar 2018 → 8 Mar 2018 |
Conference
| Conference | 13th Annual ACM/IEEE International Conference on Human Robot Interaction, HRI 2018 |
| --- | --- |
| Country/Territory | United States |
| City | Chicago |
| Period | 5/03/18 → 8/03/18 |
Keywords
- behavior explanation
- explainable robotics
- intentionality
- theory of mind
- transparency
- trust calibration