Abstract
Hypothesis testing is a challenging topic for many students in introductory university statistics courses. In this paper we explore how automated feedback in an Intelligent Tutoring System can foster students’ ability to carry out hypothesis tests. Students in an experimental group (N = 163) received elaborate feedback on the structure of the hypothesis testing procedure, while students in a control group (N = 151) received only verification feedback. Immediate feedback effects were measured by comparing the numbers of attempted tasks, complete solutions, and errors between the groups; transfer of feedback effects was measured by student performance on follow-up tasks. Results show that students receiving elaborate feedback solved more tasks and made fewer errors than those receiving only verification feedback, suggesting that they benefited from the elaborate feedback.
| Original language | English |
| --- | --- |
| Title of host publication | Artificial Intelligence in Education |
| Subtitle of host publication | 20th International Conference, AIED 2019, Chicago, IL, USA, June 25-29, 2019, Proceedings, Part II |
| Editors | S. Isotani, A. Ogan, P. Hastings, B. McLaren, R. Luckin |
| Place of publication | Cham |
| Publisher | Springer |
| Pages | 281-285 |
| Number of pages | 5 |
| ISBN (electronic) | 978-3-030-23207-8 |
| ISBN (print) | 978-3-030-23206-1 |
| DOIs | |
| Publication status | Published - Jun 2019 |
Publication series
| Name | Lecture Notes in Computer Science |
| --- | --- |
| Publisher | Springer |
| Volume | 11626 |
Keywords
- Domain reasoner
- Hypothesis testing
- Intelligent tutoring systems
- Statistics education