Constructing Interpretable Prediction Models with 1D DNNs: An Example in Irregular ECG Classification

Giacomo Lancia*, Cristian Spitoni*

*Corresponding author for this work

Research output: Working paper › Preprint › Academic

Abstract

This manuscript proposes a novel methodology for developing an interpretable prediction model for irregular Electrocardiogram (ECG) classification, using features extracted by a 1-D Deconvolutional Neural Network (1-D DNN). Given the increasing prevalence of cardiovascular disease, there is a growing demand for models that provide transparent and clinically relevant predictions, which are essential for advancing the development of automated diagnostic tools. The features extracted by the 1-D DNN are included in a simple Logistic Regression (LR) model to predict abnormal ECG patterns. Our analysis demonstrates that the features are consistent with clinical knowledge and provide an interpretable and reliable classification of conditions such as Atrial Fibrillation (AF), Myocardial Infarction (MI), and Sinus Bradycardia Rhythm (SBR). Moreover, our findings show that the simple LR model has similar predictive accuracy to more complex models, such as a 1-D Convolutional Neural Network (1-D CNN), providing a concrete example of how to efficiently integrate Explainable Artificial Intelligence (XAI) methodologies with traditional regression models.
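For illustration only, the sketch below shows the general shape of the pipeline the abstract describes: a small 1-D neural feature extractor whose pooled outputs are passed to a Logistic Regression classifier. This is not the authors' implementation; the convolutional stand-in architecture, layer sizes, signal length, and binary labels are all hypothetical assumptions introduced here.

```python
# Minimal sketch (not the authors' 1-D DNN): a 1-D neural feature extractor
# feeding an interpretable logistic-regression classifier.
# Layer sizes, signal length, and labels below are hypothetical assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class FeatureExtractor1D(nn.Module):
    """Stand-in 1-D network mapping a raw ECG lead to a feature vector."""
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(8, n_features, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # one scalar per feature map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, signal_length) -> (batch, n_features)
        return self.net(x).squeeze(-1)

# Hypothetical data: 200 single-lead ECG segments of 1,000 samples each,
# with binary labels (0 = normal rhythm, 1 = abnormal rhythm).
ecg = torch.randn(200, 1, 1000)
labels = np.random.randint(0, 2, size=200)

extractor = FeatureExtractor1D()
with torch.no_grad():
    features = extractor(ecg).numpy()  # (200, 16) feature matrix

# Interpretable classifier on top of the extracted features.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.coef_)                     # per-feature log-odds contributions
print(clf.predict_proba(features[:5]))
```

The design mirrors the idea in the abstract: the neural network is used only to produce features, while the final prediction comes from a linear model whose coefficients can be inspected directly.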
Original language: English
Publisher: arXiv
DOIs
Publication status: Published - 16 Oct 2024
