Explainable AI in healthcare: to explain, to predict, or to describe?

  • Alex Carriero*
  • Anne de Hond
  • Bram Cappers
  • Fernando Paulovich
  • Sanne Abeln
  • Karel G. M. Moons
  • Maarten van Smeden

*Corresponding author for this work

Research output: Contribution to journal › Editorial › Academic › peer-review

Abstract

Explainable Artificial Intelligence (AI) methods are designed to provide information about how AI-based models make predictions. In healthcare, there is a widespread expectation that these methods will provide relevant and accurate information about a model's inner workings to different stakeholders (ranging from patients and healthcare providers to AI and medical guideline developers). This is a challenging endeavor, since what qualifies as relevant information may differ greatly depending on the stakeholder. For many stakeholders, relevant explanations are causal in nature, yet explainable AI methods are often unable to deliver this information. Using the Describe-Predict-Explain framework, we argue that explainable AI methods are good descriptive tools: they may help to describe how a model works, but they are limited in their ability to explain why a model works in terms of true underlying biological mechanisms and cause-and-effect relations. This limits the suitability of explainable AI methods for providing actionable advice to patients or for judging the face validity of AI-based models.
Original language: English
Article number: 29
Pages (from-to): 1-8
Number of pages: 8
Journal: Diagnostic and prognostic research
Volume: 9
Issue number: 1
DOIs
Publication status: Published - 5 Dec 2025

Keywords

  • Models
