Abstract
Current approaches to implementing eXplainable Autonomous Robots (XAR) are predominantly based on Reinforcement Learning (RL), which is suitable for modelling and correcting people's first-order mental state attributions to robots. Our recent findings show that people also rely on attributing second-order beliefs (i.e., beliefs about beliefs) to robots to interpret their behavior. However, robots arguably form and act primarily on first-order beliefs and desires (about things in the environment) and do not have a functional "theory of mind". Moreover, RL models may be incapable of appropriately addressing second-order belief attribution errors. This paper aims to open a discussion of what our recent findings on second-order mental state attribution to robots imply for current approaches to XAR.
| Original language | English |
| --- | --- |
| Number of pages | 4 |
| Publication status | Published - Jun 2023 |
| Event | ICRA2023 Workshop on Explainable Robotics - Duration: 29 May 2023 → … |
Workshop

| Workshop | ICRA2023 Workshop on Explainable Robotics |
| --- | --- |
| Period | 29/05/23 → … |
Keywords
- Mind attribution
- Explainability
- Folk psychology
- Social cognition
- False-belief task