Measuring Model Understandability by means of Shapley Additive Explanations

E Mariotti, JM Alonso-Moral, A Gatt

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Abstract

    In this work we link the understandability of machine learning models to the complexity of their SHapley Additive exPlanations (SHAP). This reframing lets us introduce two novel understandability metrics: SHAP Length and SHAP Interaction Length. These metrics are model-agnostic, efficient, and intuitive, and they are grounded in well-established game-theoretic and psychological principles. We show how they align with existing model-specific metrics and how they enable a fairer comparison of epistemically different models in the context of Explainable Artificial Intelligence. In particular, we quantitatively explore the understandability-performance trade-off of different models applied to both classification and regression problems. The reported results suggest the value of the new metrics in the context of automated machine learning and multi-objective optimisation.
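
    The abstract does not reproduce the metric definitions, but the underlying idea can be sketched. Below is a minimal, hedged illustration in Python assuming (this reading is an assumption, not the authors' definition) that SHAP Length counts the features whose mean absolute SHAP value exceeds a small relevance threshold; the model, dataset, and threshold are illustrative choices.

        # Minimal sketch of a "SHAP Length"-style metric. ASSUMPTION: we read
        # SHAP Length as the number of features whose mean |SHAP value| passes
        # a relevance threshold; the paper's exact definition may differ.
        import numpy as np
        import shap
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import RandomForestRegressor

        X, y = load_diabetes(return_X_y=True)
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

        # For regression models, TreeExplainer yields an
        # (n_samples, n_features) attribution matrix.
        shap_values = shap.TreeExplainer(model).shap_values(X)

        mean_abs = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
        threshold = 0.01 * mean_abs.sum()             # hypothetical 1% cutoff
        shap_length = int((mean_abs > threshold).sum())
        print(f"SHAP Length (sketch): {shap_length} of {X.shape[1]} features")

    Intuitively, a model whose predictions rest on fewer relevant SHAP attributions should be easier to understand, which is what a count of this kind captures.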
    Original language: English
    Title of host publication: Proceedings of the 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
    Publisher: IEEE
    Number of pages: 8
    ISBN (Print): 978-1-6654-6710-0
    DOIs
    Publication status: Published - 2022
