Did You Get That? Predicting Learners' Comprehension of a Video Lecture from Visualizations of Their Gaze Data

Ellen M. M. Kok, Halszka Jarodzka, Matt Sibbald, Tamara van Gog

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In online lectures, unlike in face-to-face lectures, teachers lack access to (nonverbal) cues to check if their students are still "with them" and comprehend the lecture. The increasing availability of low-cost eye-trackers provides a promising solution. These devices unobtrusively measure where students look and can visualize these data for teachers. These visualizations might inform teachers about students' level of "with-me-ness" (i.e., do students look at the information that the teacher is currently talking about?) and comprehension of the lecture, provided that (1) gaze measures of "with-me-ness" are related to comprehension, (2) people not trained in eye-tracking can predict students' comprehension from gaze visualizations, and (3) we understand how different visualization techniques impact this prediction. We addressed these issues in two studies. In Study 1, 36 students watched a video lecture while being eye-tracked. The extent to which students looked at relevant information and the extent to which they looked at the same location as the teacher both correlated with students' comprehension (score on an open question) of the lecture. In Study 2, 50 participants watched visualizations of students' gaze (from Study 1), using six visualization techniques (dynamic and static versions of scanpaths, heatmaps, and focus maps) and were asked to predict students' posttest performance and to rate their ease of prediction. We found that people can use gaze visualizations to predict learners' comprehension above chance level, with minor differences between visualization techniques. Further research should investigate if teachers can act on the information provided by gaze visualizations and thereby improve students' learning.
Original language: English
Article number: e13247
Pages (from-to): 1-39
Number of pages: 39
Journal: Cognitive Science
Volume: 47
Issue number: 2
DOIs
Publication status: Published - Feb 2023

Bibliographical note

Publisher Copyright:
© 2022 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS).

Funding

The authors would like to thank Marja Erisman for help with data collection, Stan Linders and Jan Bouwman for scoring the open questions, and Jos Jaspers for help with data preprocessing. We would like to thank Herbert Hoijtink for his help with the BAIN analyses. This research was funded by an NRO PROO grant (project number 405-17-301).

Funder: Nationaal Regieorgaan Onderwijsonderzoek
Funder number: 405-17-301

Keywords

• Eye-tracking
• Gaze
• Gaze visualization
• Teacher assessment
• Video lectures
• With-me-ness