Exploring the Balance between Interpretability and Performance with carefully designed Constrainable Neural Additive Models

  • E. Mariotti
  • Jose M. Alonso
  • A. Gatt

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

The interpretability of an intelligent model automatically derived from data is a property that can be acted upon through a set of structural constraints that the model should adhere to. These constraints often conflict with the task objective, and it is not straightforward to explore the balance between model interpretability and performance. To allow an interested user to jointly optimise performance and interpretability, we propose a new formulation of Neural Additive Models (NAMs) that can be subject to a number of constraints. Accordingly, our approach produces a new model, called the Constrainable NAM (CNAM for short), which allows the specification of different regularisation terms. CNAM is differentiable and is built in such a way that it can be initialised as a solution of an efficient tree-based GAM solver (e.g., Explainable Boosting Machines). From this local optimum, the model can then explore solutions with different interpretability-performance tradeoffs according to different definitions of both interpretability and performance. We empirically benchmark the model on 56 datasets against 12 models and observe that, on average, the proposed CNAM model ranks on the Pareto front of optimal solutions, i.e., models generated by CNAM exhibit a good balance between interpretability and performance. Moreover, we provide two illustrative examples which show step by step how CNAM solves classification tasks and how it can yield insights on regression tasks.
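The additive structure behind NAM/CNAM (a prediction formed as a sum of per-feature shape functions, with a regularisation term steering the interpretability-performance tradeoff) can be sketched as follows. This is a deliberately simplified, hypothetical illustration, not the paper's implementation: CNAM learns neural shape functions and optimises differentiable constraints, whereas here the shape functions are fixed lookup tables over feature bins, and the function names are the author's own.

```python
# Hypothetical sketch of a generalised additive model (GAM) in the
# spirit of NAM/CNAM: shape functions are plain lookup tables over
# feature bins rather than learned neural networks.

def additive_predict(shape_functions, x_bins):
    """Prediction of an additive model: f(x) = sum_j f_j(x_j).

    shape_functions: one list per feature, giving the shape function's
                     value on each bin of that feature.
    x_bins:          list of samples, each a list of per-feature bin
                     indices.
    """
    return [sum(f[b] for f, b in zip(shape_functions, sample))
            for sample in x_bins]

def smoothness_penalty(shape_functions):
    """Total-variation-style regulariser: rougher shape functions
    (used here as a stand-in for lower interpretability) incur a
    larger penalty. Adding a term like this to the task loss is one
    way to trade performance against interpretability."""
    return sum(abs(f[i + 1] - f[i])
               for f in shape_functions
               for i in range(len(f) - 1))

# Two features with two bins each: f_0 = [0.0, 1.0], f_1 = [2.0, 3.0].
shapes = [[0.0, 1.0], [2.0, 3.0]]
preds = additive_predict(shapes, [[0, 1], [1, 0]])   # [3.0, 3.0]
penalty = smoothness_penalty(shapes)                 # 2.0
```

Because each feature contributes through its own one-dimensional shape function, every f_j can be plotted and inspected on its own, which is the interpretability property the constraints act upon.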
Original language: English
Article number: 101882
Pages (from-to): 1-14
Number of pages: 14
Journal: Information Fusion
Volume: 99
DOIs
Publication status: Published - Nov 2023

Bibliographical note

Funding Information:
This work is conducted within the NL4XAI project which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621. This work was also supported by the Spanish Ministry of Science, Innovation and Universities (grants PID2021-123152OB-C21, TED2021-130295B-C33 and RED2022-134315-T) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431G2019/04 and ED431C2022/19). All grants were co-funded by the European Regional Development Fund (ERDF/FEDER program).

Publisher Copyright:
© 2023 The Author(s)

Funding

Funders and funder numbers:

  • Galician Ministry of Culture: ED431G2019/04, ED431C2022/19
  • Horizon 2020 Framework Programme
  • H2020 Marie Skłodowska-Curie Actions: 860621
  • Ministerio de Ciencia, Innovación y Universidades: PID2021-123152OB-C21, TED2021-130295B-C33, RED2022-134315-T
  • Federación Española de Enfermedades Raras
  • Horizon 2020
  • European Regional Development Fund

Keywords

  • Generalised additive models
  • Explainable Artificial Intelligence
  • Interpretable modelling
  • Neural additive models
  • Interpretability
  • Explainability
