Using the Data Agreement Criterion to Rank Experts' Beliefs

Duco Veen*, Diederick Stoel, Naomi Schalken, Kees Mulder, Rens van de Schoot

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Experts' beliefs embody a present state of knowledge. It is desirable to take this knowledge into account when making decisions. However, ranking experts based on the merit of their beliefs is a difficult task. In this paper, we show how experts can be ranked based on their knowledge and their level of (un)certainty. By letting experts specify their knowledge in the form of a probability distribution, we can assess how accurately they predict new data and how appropriate their level of (un)certainty is. The expert's specified probability distribution can be seen as a prior in a Bayesian statistical setting. We evaluate these priors by extending an existing prior-data (dis)agreement measure, the Data Agreement Criterion, and compare this approach to using Bayes factors to assess prior specification. We compare experts with each other and with the data to evaluate their appropriateness. Using this method, new research questions can be asked and answered, for instance: Which expert predicts the new data best? Is there agreement between my experts and the data? Which expert's representation is more valid or useful? Can we reach convergence between expert judgement and data? We provide an empirical example, ranking (regional) directors of a large financial institution based on their predictions of turnover.
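To make the idea concrete, the Data Agreement Criterion compares how far the posterior obtained under a vague benchmark prior sits from each expert's elicited prior, relative to how far it sits from the benchmark prior itself; lower values indicate better prior-data agreement. The sketch below is not the authors' implementation. It assumes a simple normal model with known sampling variance, so that priors, benchmark, and posterior are all normal and the Kullback-Leibler divergence has a closed form; the expert names and data values are purely illustrative.

```python
# Minimal DAC-style ranking sketch under a normal model with known variance.
# All names and numbers are hypothetical; this is an illustration, not the
# procedure used in the paper.
import numpy as np

def kl_normal(m0, s0, m1, s1):
    """KL divergence KL( N(m0, s0^2) || N(m1, s1^2) ) in closed form."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

# Hypothetical observed data (e.g., turnover figures) and known sampling sd.
y = np.array([10.2, 11.1, 9.8, 10.5, 10.9])
sigma = 1.0
n, ybar = len(y), y.mean()

# Vague benchmark prior and the posterior it implies (conjugate normal update).
bench_mean, bench_sd = 0.0, 100.0
post_var = 1.0 / (1.0 / bench_sd**2 + n / sigma**2)
post_mean = post_var * (bench_mean / bench_sd**2 + n * ybar / sigma**2)
post_sd = np.sqrt(post_var)

# Experts' elicited beliefs, each specified as a normal prior: (mean, sd).
experts = {"expert_A": (10.0, 0.5),
           "expert_B": (12.0, 0.5),
           "expert_C": (10.0, 3.0)}

# DAC-style score: KL(posterior || expert prior) / KL(posterior || benchmark).
# Scores below 1 suggest prior-data agreement; lower scores rank higher.
denom = kl_normal(post_mean, post_sd, bench_mean, bench_sd)
scores = {name: kl_normal(post_mean, post_sd, m, s) / denom
          for name, (m, s) in experts.items()}

for name, dac in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: DAC = {dac:.3f}")
```

In this toy setup, an expert whose prior is centred near the data with a well-calibrated spread (expert_A) obtains the lowest score, an overconfident but miscentred expert (expert_B) is penalised, and an accurate but very uncertain expert (expert_C) falls in between, which mirrors the kind of ranking the abstract describes.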

Original language: English
Article number: 592
Journal: Entropy
Volume: 20
Issue number: 8
DOIs
Publication status: Published - 1 Aug 2018

Keywords

  • Bayes
  • Bayes factor
  • Decision making
  • Expert judgement
  • Kullback-Leibler divergence
  • Prior-data (dis)agreement
  • Ranking
