Abstract
In this paper we consider the problem of estimating the relative performance of a given set of related algorithms. The predominant general approach to doing so involves executing each algorithm instance multiple times and computing independent estimates from the performance observations made for each of them. A single execution can be expensive, making this a time-consuming process. We show how an algorithm can, in general, be viewed as a distribution over executions, and its performance as the expectation of some measure of desirability of an execution over this distribution. We then describe how Importance Sampling can be used to generalize performance observations across algorithms with partially overlapping distributions, amortizing the cost of obtaining them. Finally, we implement the proposed approach as a proof of concept and validate it experimentally.
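The core idea in the abstract, that each algorithm induces a distribution over executions and that observations from one algorithm can be reweighted to estimate another's performance, can be sketched with a small importance-sampling example. The Python snippet below is a minimal illustration under assumed toy choices (two Gaussian execution distributions and a quadratic desirability measure); the distributions, the `desirability` function, and all parameter values are hypothetical and are not the paper's implementation or experimental setup.

```python
# Minimal importance-sampling sketch (hypothetical toy setup, not the paper's
# experiments): each algorithm induces a distribution over executions, and an
# execution's desirability is a scalar function of it.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def desirability(x):
    # Assumed measure of how desirable an execution summarised by x is.
    return -x ** 2

# Assumed execution distributions: algorithm A ~ N(0, 1), algorithm B ~ N(0.5, 1),
# i.e. two partially overlapping distributions over executions.
mu_a, mu_b, sigma = 0.0, 0.5, 1.0
n = 10_000

# Executions actually obtained by running algorithm A.
x_a = rng.normal(mu_a, sigma, size=n)
f_a = desirability(x_a)

# Direct Monte Carlo estimate of A's performance: E_pA[f(X)].
perf_a = f_a.mean()

# Importance-sampling estimate of B's performance that reuses A's executions:
# E_pB[f(X)] ~= (1/n) * sum_i f(x_i) * pB(x_i) / pA(x_i),  with x_i ~ pA.
w = norm.pdf(x_a, loc=mu_b, scale=sigma) / norm.pdf(x_a, loc=mu_a, scale=sigma)
perf_b_is = np.mean(f_a * w)

# Reference: direct estimate of B's performance from fresh executions of B.
perf_b_mc = desirability(rng.normal(mu_b, sigma, size=n)).mean()

print(f"A, direct MC:          {perf_a:.3f}")    # about -1.00
print(f"B, importance sampled: {perf_b_is:.3f}") # about -1.25
print(f"B, direct MC:          {perf_b_mc:.3f}") # about -1.25
```

In this toy setting the importance-sampled estimate of B's performance agrees with the direct Monte Carlo estimate while requiring no additional executions of B, which is the cost-amortization effect the abstract describes.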
Original language | English |
---|---|
Title of host publication | Learning and Intelligent Optimization |
Subtitle of host publication | 11th International Conference, LION 11, Nizhny Novgorod, Russia, June 19-21, 2017, Revised Selected Papers |
Place of Publication | Cham |
Publisher | Springer |
Edition | 1 |
ISBN (Electronic) | 978-3-319-69404-7 |
ISBN (Print) | 978-3-319-69403-0 |
DOIs | |
Publication status | Published - 26 Oct 2017 |
Externally published | Yes |