Predicting Measurement Model Misfit With Machine Learning While Accounting for Nuisance Parameters – An Illustration

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Developing valid measurement models for latent variables, such as personality traits, is essential for accurate psychological assessment. A critical aspect of this process is evaluating the fit of psychometric models. However, commonly used model fit indices are often affected by nuisance parameters – such as sample and model size – making the use of conventional cutoff values problematic, as these thresholds are typically based on narrow simulation scenarios. Recently, Partsch and Goretzko (2025) introduced a machine learning-based approach to model fit evaluation that offers a more flexible and data-informed alternative. This approach considers not only various indicators of model (mis)fit but also multiple characteristics of the data and model. In this paper, we discuss how nuisance parameters can distort model fit evaluation and present the core principles of this new evaluation strategy. We further demonstrate how interpretable machine learning can reveal the decision-making process of a pretrained predictive model in identifying model misspecification.
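The general idea described above – predicting misfit from fit indices together with nuisance parameters such as sample and model size, then inspecting which features drive the prediction – can be sketched as follows. This is a hedged toy illustration, not the authors' implementation: the simulated "fit index" and its dependence on sample size and model size are invented for demonstration, and a random forest stands in for whatever learner the original approach uses.

```python
# Toy sketch (NOT the method of Partsch & Goretzko, 2025): train a classifier
# to predict model misspecification from a fit index plus nuisance parameters,
# then inspect feature importances as a simple interpretable-ML step.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sim = 2000

# Simulated training data: each row represents one fitted measurement model.
misfit = rng.integers(0, 2, n_sim)        # 1 = misspecified model (label)
N = rng.integers(100, 2000, n_sim)        # nuisance parameter: sample size
p = rng.integers(5, 30, n_sim)            # nuisance parameter: number of items
# Invented RMSEA-like index: larger under misfit, but also drifting with N and
# p, mimicking how nuisance parameters distort fixed cutoff values.
rmsea = 0.03 + 0.04 * misfit + 0.002 * p - 1e-5 * N + rng.normal(0, 0.01, n_sim)

X = np.column_stack([rmsea, N, p])
X_tr, X_te, y_tr, y_te = train_test_split(X, misfit, random_state=0)

# The classifier can "deconfound" the fit index because N and p are features.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")

# Which features drive the misfit prediction?
for name, imp in zip(["rmsea", "N", "p"], clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

In this toy setup, a fixed RMSEA cutoff would misclassify small or large models, whereas the classifier learns to condition the fit index on the nuisance parameters; richer interpretability tools (e.g. SHAP values or partial dependence plots) could replace the raw feature importances shown here.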
Original language: English
Pages (from-to): 187-198
Journal: Psychological Test Adaptation and Development
Volume: 6
Publication status: Published - 1 Dec 2025
