TY - JOUR
T1 - Topic Specificity: a Descriptive Metric for Algorithm Selection and Finding the Right Number of Topics
AU - Rijcken, Emil
AU - Zervanou, Kalliopi
AU - Mosteiro Romero, Pablo
AU - Scheepers, F.E.
AU - Spruit, Marco
AU - Kaymak, Uzay
PY - 2024/9
Y1 - 2024/9
N2 - Topic modeling is a prevalent task for discovering the latent structure of a corpus, identifying a set of topics that represent the underlying themes of its documents. Despite its popularity, issues with its evaluation metric, the coherence score, give rise to two common challenges: algorithm selection and determining the number of topics. To address these two issues, we propose the topic specificity metric, which captures the relative frequency of topic words in the corpus and serves as a proxy for word specificity. In this work, we first formulate the metric. Second, we demonstrate that algorithms train topics at different specificity levels. This insight can be used to address algorithm selection, as it allows users to distinguish and select algorithms with the desired specificity level. Lastly, we show a strictly positive monotonic correlation between topic specificity and the number of topics for LDA, FLSA-W, NMF and LSI. This correlation can be used to address the selection of the number of topics, as it allows users to adjust the number of topics to their desired level. Moreover, our descriptive metric provides a new perspective for characterizing topic models, allowing them to be better understood.
AB - Topic modeling is a prevalent task for discovering the latent structure of a corpus, identifying a set of topics that represent the underlying themes of its documents. Despite its popularity, issues with its evaluation metric, the coherence score, give rise to two common challenges: algorithm selection and determining the number of topics. To address these two issues, we propose the topic specificity metric, which captures the relative frequency of topic words in the corpus and serves as a proxy for word specificity. In this work, we first formulate the metric. Second, we demonstrate that algorithms train topics at different specificity levels. This insight can be used to address algorithm selection, as it allows users to distinguish and select algorithms with the desired specificity level. Lastly, we show a strictly positive monotonic correlation between topic specificity and the number of topics for LDA, FLSA-W, NMF and LSI. This correlation can be used to address the selection of the number of topics, as it allows users to adjust the number of topics to their desired level. Moreover, our descriptive metric provides a new perspective for characterizing topic models, allowing them to be better understood.
KW - Topic modeling
KW - Metric
KW - Algorithm selection
U2 - 10.1016/j.nlp.2024.100082
DO - 10.1016/j.nlp.2024.100082
M3 - Article
SN - 2949-7191
VL - 8
JO - Natural Language Processing
JF - Natural Language Processing
M1 - 100082
ER -