Code repository for: "Evaluation of the performance of neural network models and classical models in the context of Active Learning for Systematic Reviewing"

Pablo Vizán Siso, Jonathan de Bruin, R. van de Schoot, Laura Hofstee, Ayoub Bagheri

Research output: Non-textual form › Software › Academic

Abstract

Repository containing the code necessary to reproduce the results of my master's thesis, "Evaluation of the performance of neural network models and classical models in the context of Active Learning for Systematic Reviewing". The purpose of this study was to evaluate the performance of different classifiers and feature extraction strategies on active learning tasks applied to systematic reviewing. This repository contains the scripts needed to run the simulations and to obtain relevant information about each run (runtime, performance metrics, and plots). The data used for the simulations was originally retrieved by Brouwer, Williams et al. Simulations were run with ASReview, an open-source platform for applying active learning to systematic reviewing tasks.
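
As a minimal sketch of what one such simulation run looks like, the snippet below invokes ASReview's `simulate` subcommand from Python. The dataset path, state file name, classifier, feature extractor, and seed are placeholders chosen for illustration, and the flag names follow the ASReview command-line interface of that era (circa 2021) and may differ in later releases.

    import subprocess

    # One simulation run: naive Bayes classifier with TF-IDF features.
    # "data.csv" and "state.h5" are placeholder paths; the flag names
    # are assumptions based on the ASReview CLI around 2021 and may
    # vary between versions.
    subprocess.run(
        [
            "asreview", "simulate", "data.csv",  # labeled dataset to replay
            "--model", "nb",                     # classifier (naive Bayes)
            "--feature_extraction", "tfidf",     # feature extraction strategy
            "--state_file", "state.h5",          # stores the simulation results
            "--seed", "42",                      # fixed seed for reproducibility
        ],
        check=True,
    )

Repeating such runs while varying the `--model` and `--feature_extraction` arguments is one way to compare classifiers and feature extraction strategies, which is the kind of comparison the study describes.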
Original language: English
DOIs
Publication status: Published - 2021

Keywords

  • simulation
  • active-learning
  • asreview
  • neural-network
  • python
  • systematic-review
  • utrecht-university
