The challenges of first- and second-order belief reasoning in explainable human-robot interaction

Sam Thellman, Maartje de Graaf

Research output: Contribution to conference › Paper › Academic

Abstract

Current approaches to implementing eXplainable Autonomous Robots (XAR) are predominantly based on Reinforcement Learning (RL), which is suitable for modelling and correcting people’s first-order mental state attributions to robots. Our recent findings show that people also rely on attributing second-order beliefs (i.e., beliefs about beliefs) to robots to interpret their behavior. However, robots arguably form and act primarily on first-order beliefs and desires (about things in the environment) and do not have a functional “theory of mind”. Moreover, RL models may be incapable of appropriately addressing second-order belief attribution errors. This paper aims to open a discussion of what our recent findings on second-order mental state attribution to robots imply for current approaches to XAR.
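As a purely illustrative aid (not taken from the paper), the following minimal Python sketch contrasts first-order belief attribution (a belief about the environment) with second-order attribution (a belief about another agent's belief) in a Sally-Anne-style false-belief scenario; all names and the scenario setup are hypothetical.

```python
# Illustrative sketch, assuming a toy false-belief scenario:
# a marble is moved from a basket to a box while a person is absent.
from dataclasses import dataclass

@dataclass
class Scenario:
    marble_location: str   # where the marble actually is now
    robot_saw_move: bool   # did the robot observe the move?
    person_saw_move: bool  # did the person observe the move?

def first_order_belief(s: Scenario) -> str:
    """Robot's belief about the world: where it thinks the marble is."""
    return s.marble_location if s.robot_saw_move else "basket"

def second_order_belief(s: Scenario) -> str:
    """Robot's belief about the person's belief: where it thinks the
    person thinks the marble is."""
    return s.marble_location if s.person_saw_move else "basket"

s = Scenario(marble_location="box", robot_saw_move=True, person_saw_move=False)
print(first_order_belief(s))   # 'box'    -- robot tracks the true location
print(second_order_belief(s))  # 'basket' -- robot models the person's false belief
```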
Original language: English
Number of pages: 4
Publication status: Published - Jun 2023
Event: ICRA2023 Workshop on Explainable Robotics
Duration: 29 May 2023 → …

Workshop

Workshop: ICRA2023 Workshop on Explainable Robotics
Period: 29/05/23 → …

Keywords

  • Mind attribution
  • Explainability
  • Folk psychology
  • Social cognition
  • False-belief task
