Abstract
Digital exams often fail to assess all required mathematical skills. It is therefore advisable that large-scale exams still include some handwritten open-answer questions. However, assessing those handwritten questions with multiple assessors is often a daunting task in terms of grading reliability and feedback. This paper presents a grading approach using semi-automated assessment with atomic feedback. Exam designers preset atomic feedback items with partial grades; assessors then simply tick the items relevant to a student's answer, even allowing 'blind grading', where the underlying grades are not shown to the assessors. The approach may lead to a smoother and more reliable correction process in which feedback, and not solely grades, can be communicated to students. The experiment took place during a large-scale math exam organized by the Flemish Exam Commission, and this paper includes preliminary results of assessors' and students' impressions.
Original language | English |
---|---|
Title of host publication | Thirteenth Congress of the European Society for Research in Mathematics Education (CERME13), July 10-14, 2023, Budapest, Hungary (Alfréd Rényi Institute of Mathematics and ERME) |
Editors | Paul Drijvers, Csaba Csapodi, Hanna Palmér, Katalin Gosztonyi, Eszter Kónya |
Publisher | Alfréd Rényi Institute of Mathematics and ERME |
Chapter | TWG21 |
Pages | 4012-4019 |
ISBN (Electronic) | 978-963-7031-04-5 |
ISBN (Print) | 978-963-7031-04-5 |
Publication status | Published - 12 Jan 2024 |
Keywords
- Assessment
- Computer-assisted assessment
- State examinations
- Feedback
- Inter-rater reliability