TY - JOUR
T1 - TOMA: Computational Theory of Mind with Abstractions for Hybrid Intelligence
AU - Erdogan, Emre
AU - Dignum, Frank
AU - Verbrugge, Rineke
AU - Yolum, Pınar
N1 - Publisher Copyright:
©2025 The Authors.
PY - 2025
Y1 - 2025
N2 - Theory of mind refers to the human ability to reason about the mental content of other people, such as their beliefs, desires, and goals. People use their theory of mind to understand, reason about, and explain the behaviour of others. Having a theory of mind is especially useful when people collaborate, since individuals can then reason about what the other individual knows as well as what reasoning they might do. Similarly, hybrid intelligence systems, in which AI agents collaborate with humans, require that the agents reason about the humans using a computational theory of mind. However, keeping track of all the individual mental attitudes of all other individuals becomes computationally very difficult. Accordingly, this paper provides a mechanism for computational theory of mind based on abstractions of single beliefs into higher-level concepts. These abstractions can be triggered by social norms and roles. Their use in decision making serves as a heuristic to choose among interactions, thus facilitating collaboration. We provide a formalization based on epistemic logic to explain how various inferences enable such a computational theory of mind. Using examples from the medical domain, we demonstrate how having such a theory of mind enables an agent to interact with humans effectively and can increase the quality of the decisions humans make.
AB - Theory of mind refers to the human ability to reason about the mental content of other people, such as their beliefs, desires, and goals. People use their theory of mind to understand, reason about, and explain the behaviour of others. Having a theory of mind is especially useful when people collaborate, since individuals can then reason about what the other individual knows as well as what reasoning they might do. Similarly, hybrid intelligence systems, in which AI agents collaborate with humans, require that the agents reason about the humans using a computational theory of mind. However, keeping track of all the individual mental attitudes of all other individuals becomes computationally very difficult. Accordingly, this paper provides a mechanism for computational theory of mind based on abstractions of single beliefs into higher-level concepts. These abstractions can be triggered by social norms and roles. Their use in decision making serves as a heuristic to choose among interactions, thus facilitating collaboration. We provide a formalization based on epistemic logic to explain how various inferences enable such a computational theory of mind. Using examples from the medical domain, we demonstrate how having such a theory of mind enables an agent to interact with humans effectively and can increase the quality of the decisions humans make.
UR - http://www.scopus.com/inward/record.url?scp=85219539970&partnerID=8YFLogxK
U2 - 10.1613/jair.1.16402
DO - 10.1613/jair.1.16402
M3 - Article
AN - SCOPUS:85219539970
SN - 1076-9757
VL - 82
SP - 285
EP - 311
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
ER -