TY - JOUR
T1 - Active learning-based systematic reviewing using switching classification models
T2 - the case of the onset, maintenance, and relapse of depressive disorders
AU - Teijema, Jelle Jasper
AU - Hofstee, Laura
AU - Brouwer, Marlies
AU - de Bruin, Jonathan
AU - Ferdinands, Gerbrich
AU - de Boer, Jan
AU - Vizan, Pablo
AU - van den Brand, Sofie
AU - Bockting, Claudi
AU - van de Schoot, Rens
AU - Bagheri, Ayoub
N1 - Publisher Copyright:
Copyright © 2023 Teijema, Hofstee, Brouwer, de Bruin, Ferdinands, de Boer, Vizan, van den Brand, Bockting, van de Schoot and Bagheri.
PY - 2023/5/16
Y1 - 2023/5/16
AB - Introduction: This study examines the performance of active learning-aided systematic reviews using a deep learning-based model compared to traditional machine learning approaches, and explores the potential benefits of model-switching strategies. Methods: Comprising four parts, the study: 1) analyzes the performance and stability of active learning-aided systematic review; 2) implements a convolutional neural network classifier; 3) compares classifier and feature extractor performance; and 4) investigates the impact of model-switching strategies on review performance. Results: Lighter models perform well in early simulation stages, while other models show increased performance in later stages. Model-switching strategies generally improve performance compared to using the default classification model alone. Discussion: The study's findings support the use of model-switching strategies in active learning-based systematic review workflows. It is advised to begin the review with a light model, such as Naïve Bayes or logistic regression, and switch to a heavier classification model based on a heuristic rule when needed.
KW - active learning
KW - convolutional neural network
KW - model switching
KW - simulations
KW - systematic review
KW - work saved over sampling
UR - http://www.scopus.com/inward/record.url?scp=85162901453&partnerID=8YFLogxK
U2 - 10.3389/frma.2023.1178181
DO - 10.3389/frma.2023.1178181
M3 - Article
AN - SCOPUS:85162901453
SN - 2504-0537
VL - 8
JO - Frontiers in Research Metrics and Analytics
JF - Frontiers in Research Metrics and Analytics
M1 - 1178181
ER -