Abstract
Predictive Process Monitoring (PPM) often uses deep learning models to predict the future behavior of ongoing processes, such as predicting process outcomes. While these models achieve high accuracy, their lack of interpretability undermines user trust and adoption. Explainable AI (XAI) aims to address this challenge by providing the reasoning behind predictions. However, current evaluations of XAI in PPM focus primarily on functional metrics (such as fidelity) and overlook user-centered aspects, such as the effect of explanations on task performance and decision-making. This study investigates the effects of explanation style (feature importance, rule-based, and counterfactual) and perceived AI accuracy (low or high) on decision-making in PPM. We conducted a decision-making experiment in which users were presented with AI predictions, perceived accuracy levels, and explanations in different styles. Users’ decisions were measured both before and after receiving explanations, allowing the assessment of objective metrics (Task Performance and Agreement) and subjective metrics (Decision Confidence). Our findings show that perceived accuracy and explanation style have a significant effect on users’ decision-making.
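As a rough illustration (not taken from the paper), the objective metrics named in the abstract could be computed as sketched below: Task Performance as the share of user decisions matching the true process outcome, and Agreement as the share matching the AI prediction, each evaluated before and after explanations are shown. All names and the toy data are hypothetical.

```python
# Illustrative sketch: Task Performance and Agreement before/after explanations.
# Data structures and values are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class Trial:
    ground_truth: int      # actual process outcome (e.g., 1 = deviant case)
    ai_prediction: int     # outcome predicted by the PPM model
    decision_before: int   # user's decision before seeing the explanation
    decision_after: int    # user's decision after seeing the explanation


def task_performance(trials: List[Trial], after: bool) -> float:
    """Share of user decisions that match the true outcome."""
    hits = sum(
        (t.decision_after if after else t.decision_before) == t.ground_truth
        for t in trials
    )
    return hits / len(trials)


def agreement(trials: List[Trial], after: bool) -> float:
    """Share of user decisions that match the AI prediction."""
    hits = sum(
        (t.decision_after if after else t.decision_before) == t.ai_prediction
        for t in trials
    )
    return hits / len(trials)


# Toy usage with made-up trials:
trials = [Trial(1, 1, 0, 1), Trial(0, 1, 0, 0), Trial(1, 1, 1, 1)]
print(task_performance(trials, after=False), task_performance(trials, after=True))
print(agreement(trials, after=False), agreement(trials, after=True))
```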
| Original language | English |
|---|---|
| Title of host publication | Advanced Information Systems Engineering |
| Subtitle of host publication | 37th International Conference, CAiSE 2025, Vienna, Austria, June 16–20, 2025, Proceedings, Part II |
| Publisher | Springer |
| Pages | 39–56 |
| Number of pages | 18 |
| ISBN (Electronic) | 978-3-031-94571-7 |
| ISBN (Print) | 978-3-031-94573-1 |
| DOIs | |
| Publication status | Published - 15 Jun 2025 |
Publication series
| Name | Lecture Notes in Computer Science |
|---|---|
| Volume | 15702 |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Bibliographical note
Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.