Machine Learning—Evaluation (Cross-validation, Metrics, Importance Scores...)

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic

Abstract

The high performance of machine learning (ML) techniques on a wide range of data analytics tasks has led to the development of a large number of models. Although these models offer multiple options for performing the task at hand, selecting the right one becomes more challenging. Because ML models perform differently depending on the nature of the data and the application, a well-designed evaluation process helps in selecting the appropriate model. Depending on the nature of the ML model and the user’s interest, different evaluation experiments can be designed to gain better insight into the model’s performance. In this chapter, we discuss evaluation techniques that suit both supervised and unsupervised models, including cross-validation and the bootstrap. Moreover, we present a set of performance measures that indicate how a model would perform in real applications. For each performance measure, we discuss the optimal values a given model can achieve and what should be considered acceptable. We also show the relationships between the different measures, which can provide additional insight when interpreting the results of a given ML model.
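As a concrete illustration of the cross-validation procedure the chapter discusses, a minimal k-fold split can be sketched in plain Python. This is an illustrative sketch only; the fold count and sample size are assumptions, not values taken from the chapter:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Each of the k folds serves once as the held-out test set while the
    remaining folds form the training set.
    """
    # Distribute samples as evenly as possible across the k folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

# Example: 10 samples split into 5 folds of 2 test samples each
folds = list(k_fold_indices(10, 5))
```

In practice each `(train_idx, test_idx)` pair would be used to fit the model on the training portion and compute a performance measure on the held-out portion, averaging the scores across folds.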
Original language: English
Title of host publication: Clinical Applications of Artificial Intelligence in Real-World Data
Editors: Folkert W. Asselbergs, Spiros Denaxas, Daniel L. Oberski, Jason H. Moore
Place of Publication: Cham
Publisher: Springer
Pages: 175–187
Edition: 1
ISBN (Electronic): 978-3-031-36678-9
ISBN (Print): 978-3-031-36677-2, 978-3-031-36680-2
DOIs
Publication status: Published - 5 Nov 2023

Keywords

  • Machine learning
  • Training
  • Testing
  • Regression
  • Classification
  • Clustering
