Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations

Federico Maria Cau, Hanna Hauptmann, Lucio Davide Spano, Nava Tintarev

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI: rejecting its advice when it is incorrect and accepting it when it is correct. Previous findings suggest that explanations can cause over-reliance on AI (overly accepting advice). Evoking appropriate trust through explanations is even more challenging for decision-making tasks that are difficult for both humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of a) the users’ reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), b) the correctness of the decision (task performance), and c) the agreement with the AI’s prediction. In contrast to previous work, we examine interactions between different aspects of decision-making, including AI correctness, and the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve users’ task performance compared to inductive explanations when AI confidence is high. In other words, these explanation styles elicited correct decisions (both positive and negative) when the system was certain. Under this condition, the agreement between the user’s decision and the AI’s prediction confirms this finding, showing a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI.

Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.
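Illustrative sketch (not from the paper): the abstract does not describe how the underlying model or its confidence is computed, but a minimal example of the kind of setup the keywords suggest, a random forest producing a stock-movement prediction together with a confidence score that an XAI interface could display, might look as follows in Python with scikit-learn. The features, labels, and thresholds below are assumptions for illustration only.

# Minimal sketch (assumed setup, not the authors' implementation): a random forest
# that predicts whether a stock will rise and exposes a confidence score,
# which an interface could show next to an explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical technical-indicator features (e.g., RSI, MACD, moving-average gap)
# and binary labels (1 = price goes up); real data would come from market history.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

sample = X[:1]
proba = model.predict_proba(sample)[0]   # class probabilities for one instance
prediction = int(proba.argmax())         # AI advice: rise (1) or not (0)
confidence = float(proba.max())          # AI confidence shown to the user

print(f"prediction={prediction}, confidence={confidence:.2f}")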
Original language: English
Title of host publication: IUI '23
Subtitle of host publication: Proceedings of the 28th International Conference on Intelligent User Interfaces
Publisher: Association for Computing Machinery
Pages: 251-263
Number of pages: 13
ISBN (Electronic): 9798400701061
ISBN (Print): 979-8-4007-0106-1
DOIs
Publication status: Published - 27 Mar 2023

Bibliographical note

Funding Information:
Federico Maria Cau gratefully acknowledges the “CRS4 - Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna” for the PhD funding and the collaboration on the RIALE (Remote Intelligent Access to Lab Experiment) Platform. The work has been partially supported by the Sardinia Regional Government and by Fondazione di Sardegna, ASTRID project (FdS 2020) - CUP F75F21001220007.

Publisher Copyright:
© 2023 ACM.

Keywords

  • XAI
  • AI confidence
  • Logical reasoning
  • Inductive
  • Deductive
  • Abductive
  • Random forest
  • Stock market prediction

