Automated feedback on the structure of hypothesis tests

S.G. Tacoma, B.J. Heeren, J.T. Jeuring, P.H.M. Drijvers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Hypothesis testing is a challenging topic for many students in introductory university statistics courses. In this paper we explore how automated feedback in an Intelligent Tutoring System can foster students’ ability to carry out hypothesis tests. Students in an experimental group (N = 163) received elaborate feedback on the structure of the hypothesis testing procedure, while students in a control group (N = 151) only received verification feedback. Immediate feedback effects were measured by comparing numbers of attempted tasks, complete solutions, and errors between the groups, while transfer of feedback effects was measured by student performance on follow-up tasks. Results show that students receiving elaborate feedback solved more tasks and made fewer errors than students receiving only verification feedback, which suggests that students benefited from the elaborate feedback.
Original language: English
Title of host publication: Artificial Intelligence in Education
Subtitle of host publication: 20th International Conference, AIED 2019, Chicago, IL, USA, June 25-29, 2019, Proceedings, Part II
Editors: S. Isotani, A. Ogan, P. Hastings, B. McLaren, R. Luckin
Place of publication: Cham
Publisher: Springer
Pages: 281-285
Number of pages: 5
ISBN (Electronic): 978-3-030-23207-8
ISBN (Print): 978-3-030-23206-1
DOIs
Publication status: Published - Jun 2019

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 11626

Keywords

  • Domain reasoner
  • Hypothesis testing
  • Intelligent tutoring systems
  • Statistics education
