Toward a standardized evaluation of imputation methodology

Hanne I. Oberman*, Gerko Vink

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

Developing new imputation methodology has become a very active field. Unfortunately, there is no consensus on how to perform simulation studies to evaluate the properties of imputation methods. In part, this may be due to different aims between fields and studies. For example, when evaluating imputation techniques aimed at prediction, the aims may differ from those formulated when statistical inference is of interest. The lack of consensus may also stem from different personal preferences or scientific backgrounds. All in all, the lack of common ground in evaluating imputation methodology may lead to suboptimal use in practice. In this paper, we propose a move toward a standardized evaluation of imputation methodology. To demonstrate the need for standardization, we highlight a set of possible pitfalls that bring forth a chain of potential problems in the objective assessment of the performance of imputation routines. Additionally, we suggest a course of action for simulating and evaluating missing data problems. Our suggested course of action is by no means meant to serve as a complete cookbook; rather, it is meant to incite critical thinking and a move toward objective and fair evaluations of imputation methodology. We invite the readers of this paper to contribute to the suggested course of action.
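To make the abstract's notion of "simulating and evaluating missing data problems" concrete, the sketch below shows the general shape of such a simulation study: generate complete data, induce missingness, impute, and score the result on standard performance measures (bias, confidence-interval coverage, and interval width). This is a minimal illustration only, not the workflow proposed in the paper; the data-generating model, the MAR missingness mechanism, and the single stochastic-regression imputation step are all illustrative assumptions.

    # Minimal sketch of a missing-data simulation study (illustrative assumptions only).
    import numpy as np

    rng = np.random.default_rng(2024)
    n, nsim, true_mean_y = 200, 1000, 0.0

    results = {"bias": [], "cover": [], "ciw": []}
    for _ in range(nsim):
        # 1. Simulate complete bivariate data with correlated x and y.
        x = rng.normal(size=n)
        y = 0.5 * x + rng.normal(scale=np.sqrt(0.75), size=n)

        # 2. Induce MAR missingness in y: larger x -> higher probability that y is missing.
        p_miss = 1 / (1 + np.exp(-(x - 0.5)))
        y_obs = np.where(rng.uniform(size=n) < p_miss, np.nan, y)

        # 3. Impute with stochastic regression imputation (a single imputation, for
        #    brevity; multiple imputation would be needed for valid inference).
        obs = ~np.isnan(y_obs)
        beta = np.polyfit(x[obs], y_obs[obs], deg=1)
        resid_sd = np.std(y_obs[obs] - np.polyval(beta, x[obs]), ddof=2)
        y_imp = y_obs.copy()
        y_imp[~obs] = np.polyval(beta, x[~obs]) + rng.normal(scale=resid_sd, size=(~obs).sum())

        # 4. Evaluate: bias, 95% CI coverage, and CI width for the mean of y.
        est = y_imp.mean()
        se = y_imp.std(ddof=1) / np.sqrt(n)
        lo, hi = est - 1.96 * se, est + 1.96 * se
        results["bias"].append(est - true_mean_y)
        results["cover"].append(lo <= true_mean_y <= hi)
        results["ciw"].append(hi - lo)

    print("mean bias:    ", np.mean(results["bias"]))
    print("coverage rate:", np.mean(results["cover"]))
    print("mean CI width:", np.mean(results["ciw"]))

Because the sketch treats the imputed values as if they were observed, the coverage rate will typically fall below the nominal 95%, which is exactly the kind of pitfall an objective evaluation should surface.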
Original language: English
Article number: 2200107
Number of pages: 12
Journal: Biometrical Journal
Volume: 66
Issue number: 1
Early online date: 17 Mar 2023
DOIs
Publication status: Published - Jan 2024

Keywords

  • evaluation
  • imputation
  • missing data
  • simulation studies
