An Importance Sampling Approach to the Estimation of Algorithm Performance in Automated Algorithm Design

Steven Adriaensen, Filip Moons, Ann Nowé

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

In this paper we consider the problem of estimating the relative performance of a given set of related algorithms. The predominant, general approach to doing so involves executing each algorithm instance multiple times and computing independent estimates based on the performance observations made for each of them. A single execution might be expensive, making this a time-consuming process. We show how an algorithm can, in general, be viewed as a distribution over executions, and its performance as the expectation, over this distribution, of some measure of the desirability of an execution. Subsequently, we describe how Importance Sampling can be used to generalize performance observations across algorithms with partially overlapping distributions, amortizing the cost of obtaining them. Finally, we implement the proposed approach as a proof of concept and validate it experimentally.
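As a rough illustration of the estimator the abstract describes, the sketch below reuses executions sampled from one algorithm to estimate another algorithm's expected performance via importance-sampling weights. The Gaussian execution densities, the desirability measure, and the one-dimensional "executions" are hypothetical stand-ins chosen for brevity, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two "algorithms" A and B, each viewed as a
# distribution over executions x, with performance defined as E[f(x)]
# for a desirability measure f. Executions are 1-D for illustration.
def density_a(x):  # assumed density of A's executions: N(0, 1)
    return np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)

def density_b(x):  # assumed density of B's executions: N(0.5, 1), overlapping A
    return np.exp(-0.5 * (x - 0.5) ** 2) / np.sqrt(2 * np.pi)

def desirability(x):  # f(x): how desirable an execution is (hypothetical)
    return -x ** 2

# Executions obtained by actually running algorithm A.
samples_a = rng.normal(0.0, 1.0, size=100_000)

# Direct Monte Carlo estimate of A's performance from its own executions.
perf_a = desirability(samples_a).mean()

# Importance-sampling estimate of B's performance reusing A's executions:
# E_B[f] = E_A[f(x) * p_B(x) / p_A(x)], valid where the supports overlap.
weights = density_b(samples_a) / density_a(samples_a)
perf_b_is = (weights * desirability(samples_a)).mean()

print(f"A (direct estimate):     {perf_a:.3f}")    # true value: -1.00
print(f"B (importance-sampled):  {perf_b_is:.3f}") # true value: -1.25
```

The key point, mirroring the abstract, is that B's performance is estimated without executing B at all: observations from A are reweighted by the ratio of the two execution densities, amortizing the cost of the expensive runs across both algorithms.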
Original language: English
Title of host publication: Learning and Intelligent Optimization
Subtitle of host publication: 11th International Conference, LION 11, Nizhny Novgorod, Russia, June 19-21, 2017, Revised Selected Papers
Place of publication: Cham
Publisher: Springer
Edition: 1
ISBN (Electronic): 978-3-319-69404-7
ISBN (Print): 978-3-319-69403-0
DOIs
Publication status: Published - 26 Oct 2017
Externally published: Yes

