Abstract
Empowering artificially intelligent agents with capabilities that humans use regularly is crucial to enabling effective human-agent collaboration. One such capability is the modeling of Theory of Mind (ToM) reasoning: the human ability to reason about the mental content of others, such as their beliefs, desires, and goals. However, tracking every individual mental attitude of every other individual is generally impractical and, in many practical situations, not even necessary. What matters instead is capturing enough information to build an approximate model that is both effective and flexible. Accordingly, this paper proposes a computational ToM mechanism based on abstracting beliefs and knowledge into higher-level human concepts, called abstractions, similar to those that guide humans in interacting effectively with each other (e.g., trust). We develop an agent architecture based on epistemic logic to formalize the computational dynamics of ToM reasoning. We identify important challenges regarding the effective maintenance of abstractions and the accurate use of ToM reasoning, and demonstrate how our approach addresses these challenges in multiagent simulations.
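To make the core idea concrete, below is a minimal sketch of what abstraction-based ToM reasoning could look like: rather than storing every belief it attributes to a peer, an observer compresses its evidence about that peer into a single higher-level abstraction (here, trust), which then guides whether to rely on the peer's reports. This is an illustrative assumption, not the paper's implementation; all names (`ToMAgent`, `update_trust`, `rely_on`) and the moving-average update rule are hypothetical.

```python
# A minimal sketch (not the authors' implementation) of abstraction-based
# ToM reasoning: instead of tracking every belief attributed to every
# other agent, an observer compresses its observations into a
# higher-level abstraction such as trust, which then guides decisions.
# All names and the update rule are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ToMAgent:
    """Observer that maintains one abstraction (trust) per peer."""
    trust: dict[str, float] = field(default_factory=dict)  # peer -> [0, 1]
    learning_rate: float = 0.2

    def update_trust(self, peer: str, report_was_accurate: bool) -> None:
        # Exponential moving average: recent evidence about whether the
        # peer's reported beliefs matched reality outweighs old evidence,
        # so the abstraction stays cheap to maintain yet adapts over time.
        prev = self.trust.get(peer, 0.5)  # start from an uninformed prior
        target = 1.0 if report_was_accurate else 0.0
        self.trust[peer] = prev + self.learning_rate * (target - prev)

    def rely_on(self, peer: str, threshold: float = 0.6) -> bool:
        # ToM-guided decision: adopt the peer's reported belief only if
        # the trust abstraction says their beliefs tend to be accurate.
        return self.trust.get(peer, 0.5) >= threshold


if __name__ == "__main__":
    me = ToMAgent()
    for accurate in [True, True, False, True, True]:
        me.update_trust("bob", accurate)
    print(f"trust in bob: {me.trust['bob']:.2f}, rely: {me.rely_on('bob')}")
```

The design choice the sketch illustrates is the one the abstract argues for: a single scalar per peer replaces an unbounded set of attributed mental attitudes, trading fidelity for tractability while remaining informative enough to drive interaction decisions.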
| Original language | English |
| --- | --- |
| Pages (from-to) | 2249-2251 |
| Number of pages | 3 |
| Journal | Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS |
| Volume | 2024 |
| Issue number | May |
| DOIs | |
| Publication status | Published - 6 May 2024 |
| Event | 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024, Auckland, New Zealand. Duration: 6 May 2024 - 10 May 2024 |
Bibliographical note
Publisher Copyright: © 2024 International Foundation for Autonomous Agents and Multiagent Systems.
Keywords
- Abstraction
- Human-AI Collaboration
- Theory of Mind