Performance of active learning models for screening prioritization in systematic reviews: a simulation study into the Average Time to Discover relevant records

Gerbrich Ferdinands*, Raoul Schram, Jonathan de Bruin, Ayoub Bagheri, Daniel L. Oberski, Lars Tummers, Jelle Jasper Teijema, Rens van de Schoot

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Background: Conducting a systematic review demands significant effort in screening titles and abstracts. To accelerate this process, various tools that use active learning have been proposed. These tools allow the reviewer to interact with machine learning software to identify relevant publications as early as possible. The goal of this study is to gain a comprehensive understanding of active learning models for reducing the workload in systematic reviews through a simulation study.

Methods: The simulation study mimics the process of a human reviewer screening records while interacting with an active learning model. Different active learning models were compared based on four classification techniques (naive Bayes, logistic regression, support vector machines, and random forest) and two feature extraction strategies (TF-IDF and doc2vec). The performance of the models was compared on six systematic review datasets from different research areas. Evaluation was based on Work Saved over Sampling (WSS) and recall. Additionally, this study introduces two new statistics, the Time to Discovery (TD) and the Average Time to Discovery (ATD).

Results: The models reduce the number of publications that need to be screened by 63.9 to 91.7% while still finding 95% of all relevant records (WSS@95). Recall, defined here as the proportion of relevant records found after screening 10% of all records, ranges from 53.6 to 99.8%. The ATD values range from 1.4 to 11.7% and indicate the average proportion of labeling decisions a researcher needs to make to detect a relevant record. The ATD values display a similar ranking across the simulations as the recall and WSS values.

Conclusions: Active learning models for screening prioritization show significant potential for reducing the workload in systematic reviews. The naive Bayes + TF-IDF model yielded the best results overall. The Average Time to Discovery (ATD) measures the performance of active learning models throughout the entire screening process without the need for an arbitrary cut-off point, making it a promising metric for comparing the performance of different models across different datasets.
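The three evaluation statistics named in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes the simulation yields the order in which records were screened as a list of binary labels (1 = relevant), and it uses common textbook definitions of WSS and recall together with the abstract's description of ATD (mean proportion of labeling decisions needed to detect each relevant record). Function names and exact formulas are the sketch's own assumptions.

```python
def recall_at(screened_labels, n_relevant, fraction):
    """Proportion of relevant records found after screening `fraction` of all records."""
    cutoff = int(len(screened_labels) * fraction)
    return sum(screened_labels[:cutoff]) / n_relevant

def wss_at(screened_labels, n_relevant, recall_level=0.95):
    """Work Saved over Sampling: proportion of records left unscreened when the
    target recall is reached, minus the (1 - recall_level) saved by random screening."""
    target = recall_level * n_relevant
    found = 0
    for i, label in enumerate(screened_labels, start=1):
        found += label
        if found >= target:
            return (len(screened_labels) - i) / len(screened_labels) - (1 - recall_level)
    return 0.0  # target recall never reached

def atd(screened_labels):
    """Average Time to Discovery: mean, over all relevant records, of the
    proportion of records screened at the moment each one was found."""
    n = len(screened_labels)
    discovery_times = [i / n for i, label in enumerate(screened_labels, start=1) if label]
    return sum(discovery_times) / len(discovery_times)
```

For example, if 3 relevant records in a 10-record dataset are found at screening positions 1, 2, and 4, the ATD is (0.1 + 0.2 + 0.4) / 3 ≈ 0.23, and WSS@95 is (10 − 4)/10 − 0.05 = 0.55. Unlike WSS@95 or recall@10%, the ATD needs no cut-off point, which is the property the abstract highlights.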

Original language: English
Article number: 100
Number of pages: 12
Journal: Systematic Reviews
Volume: 12
Issue number: 1
DOIs
Publication status: Published - Dec 2023

Bibliographical note

Funding Information:
This project was funded by the Innovation Fund for IT in Research Projects, Utrecht University, The Netherlands. Access to the Cartesius supercomputer was granted by SURFsara (ID EINF-156). Both the Innovation Fund and SURFsara had no role whatsoever in the design of the current study, nor in the data collection, analysis and interpretation, nor in writing the manuscript.

Publisher Copyright:
© 2023, The Author(s).

Keywords

  • Active learning
  • Computer simulation
  • Data mining
  • Machine learning
  • Screening prioritization
  • Systematic reviews
