The Persistence of Most Probable Explanations in Bayesian Networks

Arnoud Pastink, Linda van der Gaag

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Monitoring applications of Bayesian networks require computing a sequence of most probable explanations for the observations from a monitored entity at consecutive time steps. Such computations rapidly become impracticable, especially when they must be performed in real time. In this paper, we argue that a sequence of explanations can often be feasibly computed if consecutive time steps share large numbers of observed features. More specifically, we show that persistence of an explanation can be concluded at an early stage of propagation. We present an algorithm that exploits this result to forestall unnecessary re-computation of explanations.
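The idea of reusing an explanation across time steps can be illustrated with a minimal sketch. Everything below is hypothetical and not from the paper: the toy joint distribution, the brute-force `mpe` function, and the `MonitoredMPE` cache are illustrative stand-ins, and the cache uses the much cruder test "evidence unchanged" in place of the paper's early persistence test during propagation.

```python
from itertools import product

# Toy joint distribution over three binary variables A, B, C (made-up numbers).
VARS = ("A", "B", "C")
JOINT = dict(zip(product([0, 1], repeat=3),
                 [0.05, 0.10, 0.15, 0.05, 0.20, 0.10, 0.25, 0.10]))


def mpe(evidence):
    """Brute-force MPE: the most probable full assignment consistent with
    the evidence (a dict mapping variable name to observed value)."""
    best, best_p = None, -1.0
    for assign, p in JOINT.items():
        full = dict(zip(VARS, assign))
        if all(full[v] == val for v, val in evidence.items()) and p > best_p:
            best, best_p = full, p
    return best


class MonitoredMPE:
    """Cache the last explanation across monitoring steps.

    A real persistence check would, as the paper argues, conclude at an
    early stage of propagation that the old explanation still holds; this
    sketch only skips re-computation when the evidence is identical."""

    def __init__(self):
        self._evidence = None
        self._explanation = None

    def explain(self, evidence):
        if evidence != self._evidence:        # observations changed: recompute
            self._explanation = mpe(evidence)
            self._evidence = dict(evidence)
        return self._explanation              # otherwise reuse the cached MPE
```

For example, calling `explain({"A": 1})` twice in a row performs the brute-force search only once and returns the same cached explanation object the second time.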
Original language: English
Title of host publication: ECAI 2014
Subtitle of host publication: 21st European Conference on Artificial Intelligence, 18–22 August 2014, Prague, Czech Republic – Including Prestigious Applications of Intelligent Systems (PAIS 2014)
Editors: T. Schaub, G. Friedrich, B. O'Sullivan
Pages: 693–698
Number of pages: 6
Volume: 263
ISBN (Electronic): 9781614994190
Publication status: Published - 2014
