Abstract
The clinical notes in electronic health records offer many opportunities for predictive text classification tasks. In the clinical domain, the interpretability of such classification models is critical for decision making. Using topic models for text classification of electronic health records allows topics to serve as features, which makes the classification more interpretable. However, selecting the most effective topic model is not trivial. In this work, we propose considerations for selecting a suitable topic model for text classification, based on both predictive performance and an interpretability measure. We compare 17 topic models on interpretability and predictive performance in an inpatient violence prediction task using clinical notes. We find no correlation between interpretability and predictive performance. Furthermore, although no model outperforms all others on both criteria, our proposed fuzzy topic modeling algorithm (FLSA-W) performs best for interpretability in most settings, whereas two state-of-the-art methods (ProdLDA and LSI) achieve the best predictive performance.
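The pipeline summarized above (fit a topic model on the notes, represent each note by its topic distribution, and train a classifier on those topic features) can be sketched in a few lines. The snippet below is an illustrative sketch only, not the authors' implementation: it assumes gensim's LdaModel as a stand-in topic model and scikit-learn's LogisticRegression as the downstream classifier, and the tokenized documents and labels are hypothetical placeholders (the paper itself compares 17 topic models, including FLSA-W and ProdLDA).

```python
# Illustrative sketch: topics as interpretable features for text classification.
# LdaModel is used here as a stand-in; the paper compares 17 topic models.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from sklearn.linear_model import LogisticRegression

# Hypothetical tokenized clinical notes and binary labels (e.g., violence incident yes/no).
docs = [["patient", "agitated", "during", "admission"],
        ["calm", "cooperative", "no", "incidents"]]
labels = [1, 0]

# 1) Fit a topic model on the (training) notes.
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)

# 2) Represent each note by its document-topic distribution (the interpretable features).
def topic_features(bow, model, num_topics):
    dist = dict(model.get_document_topics(bow, minimum_probability=0.0))
    return [dist.get(k, 0.0) for k in range(num_topics)]

X = [topic_features(bow, lda, lda.num_topics) for bow in corpus]

# 3) Train a downstream classifier on the topic features.
clf = LogisticRegression().fit(X, labels)

# Each classifier coefficient weighs a topic, and each topic can be inspected via its
# top words, which is what makes the resulting predictions interpretable.
print(lda.show_topics(num_words=3))
print(clf.predict(X))
```

In practice, the same topic-feature representation can be produced by any of the compared topic models; only step 1 and the feature-extraction call change.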
Original language | English |
---|---|
Article number | 846930 |
Pages (from-to) | 1-11 |
Journal | Frontiers in Big Data |
Volume | 5 |
DOIs | |
Publication status | Published - 4 May 2022 |
Bibliographical note
Funding Information: We acknowledge the COmputing VIsits DAta (COVIDA) funding provided by the strategic alliance of TU/e, WUR, UU, and UMC Utrecht.
Publisher Copyright:
Copyright © 2022 Rijcken, Kaymak, Scheepers, Mosteiro, Zervanou and Spruit.
Keywords
- text classification
- topic modeling
- explainability
- interpretability
- electronic health records
- psychiatry
- natural language processing
- information extraction