Abstract
Introduction: This study examines the performance of active learning-aided systematic reviews using a deep learning-based model compared to traditional machine learning approaches, and explores the potential benefits of model-switching strategies.

Methods: The study comprises four parts: 1) analyzing the performance and stability of active learning-aided systematic reviews; 2) implementing a convolutional neural network classifier; 3) comparing classifier and feature-extractor performance; and 4) investigating the impact of model-switching strategies on review performance.

Results: Lighter models perform well in the early stages of the simulation, while heavier models show increased performance in later stages. Model-switching strategies generally improve performance compared to using the default classification model alone.

Discussion: The study's findings support the use of model-switching strategies in active learning-based systematic review workflows. It is advised to begin the review with a light model, such as Naïve Bayes or logistic regression, and to switch to a heavier classification model, based on a heuristic rule, when needed.
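The model-switching idea summarized in the Discussion can be illustrated with a small, self-contained simulation. The sketch below is an illustrative toy example only, not the pipeline used in the study: TF-IDF features, Naïve Bayes as the light starter model, a random forest as a stand-in heavier model, and a fixed switch point as a placeholder for the heuristic switch rule are all assumptions introduced here.

```python
# Minimal sketch of a model-switching active learning loop for screening simulation.
# Illustrative only; classifier choices and the switch heuristic are assumptions,
# not the configuration evaluated in the article.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier


def simulate_switching_review(texts, labels, n_queries=50, switch_after=20, seed=42):
    """Simulate screening with a light model first, then a heavier model.

    Assumes `labels` contains at least one relevant (1) and one irrelevant (0) record.
    Returns the number of relevant records found within `n_queries` screening steps.
    """
    X = TfidfVectorizer(max_features=5000).fit_transform(texts)
    y = np.asarray(labels)

    # Seed the training set with one relevant and one irrelevant record.
    labelled = [int(np.flatnonzero(y == 1)[0]), int(np.flatnonzero(y == 0)[0])]
    unlabelled = [i for i in range(len(y)) if i not in labelled]

    found = 0
    for step in range(min(n_queries, len(unlabelled))):
        # Heuristic rule (assumed here as a fixed step count): start with a light
        # model, switch to a heavier one once enough records have been labelled.
        if step < switch_after:
            model = MultinomialNB()
        else:
            model = RandomForestClassifier(n_estimators=100, random_state=seed)
        model.fit(X[labelled], y[labelled])

        # Certainty-based querying: screen the record most likely to be relevant.
        probs = model.predict_proba(X[unlabelled])[:, 1]
        pick = unlabelled[int(np.argmax(probs))]
        labelled.append(pick)
        unlabelled.remove(pick)
        found += int(y[pick] == 1)
    return found
```

The fixed `switch_after` threshold stands in for the heuristic rule mentioned in the abstract; which heavier model to switch to, and when, is precisely what the model-switching experiments in the article investigate.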
| Original language | English |
|---|---|
| Article number | 1178181 |
| Number of pages | 14 |
| Journal | Frontiers in Research Metrics and Analytics |
| Volume | 8 |
| DOIs | |
| Publication status | Published - 16 May 2023 |
Bibliographical note
Publisher Copyright: Copyright © 2023 Teijema, Hofstee, Brouwer, de Bruin, Ferdinands, de Boer, Vizan, van den Brand, Bockting, van de Schoot and Bagheri.
Funding
This project is funded by a grant from the Center for Urban Mental Health, University of Amsterdam, The Netherlands.
| Funders |
|---|
| Center for Urban Mental Health |
| NIZW/ Universiteit van Amsterdam |
Keywords
- active learning
- convolutional neural network
- model switching
- simulations
- systematic review
- work saved over sampling