Abstract
Machine learning (ML) algorithms are increasingly used in high-stakes domains such as healthcare. While ML systems frequently outperform humans on specific tasks, ensuring safety and transparency is critical in these domains. Interpretability therefore plays a crucial role in understanding the decision-making process, auditing and correcting ML models, and establishing trust. Furthermore, there is a growing demand for automated machine learning (AutoML) to facilitate model development without expert intervention. However, the combination of interpretability and AutoML has received limited attention thus far. In this study, we propose two objective, model-agnostic measures of interpretability that quantify model compactness and explanation stability, embedded within an automated interpretable ML pipeline. We experiment with a set of interpretable models on medical classification tasks, reporting the proposed measures alongside predictive performance. We further conduct a user study with domain experts to evaluate the correlation between these measures and the subjective concept of interpretability. Our findings demonstrate the effectiveness of the proposed measures and validate their utility in creating an interpretable automated pipeline.
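The abstract does not define the two measures, so the following is only a minimal sketch of what a model-agnostic compactness score and an explanation-stability score *could* look like: here compactness is taken as the fraction of input features a sparse model leaves unused, and stability as the mean pairwise cosine similarity of feature-importance vectors refit on bootstrap resamples. Both definitions, and all function names, are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def compactness(coef, tol=1e-8):
    """Hypothetical compactness: share of features the model does NOT use (1.0 = most compact)."""
    used = np.sum(np.abs(coef) > tol)
    return 1.0 - used / coef.size

def explanation_stability(model_factory, X, y, n_boot=10, seed=0):
    """Hypothetical stability: mean pairwise cosine similarity of normalized
    feature-importance vectors from models refit on bootstrap resamples."""
    rng = np.random.RandomState(seed)
    importances = []
    for _ in range(n_boot):
        Xb, yb = resample(X, y, random_state=rng)
        m = model_factory().fit(Xb, yb)
        v = np.abs(m.coef_).ravel()
        importances.append(v / (np.linalg.norm(v) + 1e-12))
    V = np.vstack(importances)          # (n_boot, n_features)
    sims = V @ V.T                      # pairwise cosine similarities
    iu = np.triu_indices(n_boot, k=1)   # upper triangle, excluding diagonal
    return float(sims[iu].mean())       # 1.0 = perfectly stable explanations

# Toy usage on a medical classification task (breast cancer dataset):
X, y = load_breast_cancer(return_X_y=True)
factory = lambda: LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model = factory().fit(X, y)
print("compactness:", compactness(model.coef_))
print("stability:", explanation_stability(factory, X, y))
```

Both scores are bounded in [0, 1] and require only refitting the model, so a sketch like this stays model-agnostic in the sense the abstract describes; the paper's actual formulations may differ.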
| Original language | English |
|---|---|
| Title of host publication | xAI-2023 Late-breaking Work, Demos and Doctoral Consortium Joint Proceedings |
| Subtitle of host publication | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) |
| Editors | Luca Longo |
| Publisher | CEUR WS |
| Pages | 18-23 |
| Number of pages | 6 |
| Volume | 3554 |
| Publication status | Published - 20 Nov 2023 |
| Event | The 1st World Conference on eXplainable Artificial Intelligence, Lisboa, Portugal. Duration: 26 Jul 2023 → 28 Jul 2023. Conference number: 1. https://xaiworldconference.com/2023/ |
Publication series
| Name | CEUR Workshop Proceedings |
|---|---|
| ISSN (Print) | 1613-0073 |
Conference
| Conference | The 1st World Conference on eXplainable Artificial Intelligence |
|---|---|
| Abbreviated title | xAI 2023 |
| Country/Territory | Portugal |
| City | Lisboa |
| Period | 26/07/23 → 28/07/23 |
| Internet address | https://xaiworldconference.com/2023/ |
Bibliographical note
Publisher Copyright: © 2023 CEUR-WS. All rights reserved.
Funding
The research leading to this publication was conducted by T. Haagen during her MSc thesis internship at the InfoSupport company, using the Atalmedial dataset.
Keywords
- Automated Machine Learning (AutoML)
- Interpretability measures
- Interpretable automated pipeline
- Machine Learning for healthcare
- Model-agnostic measures