Abstract
Assessing exams with multiple assessors poses challenges for inter-rater reliability and feedback. This paper presents ‘checkbox grading,’ a digital method in which exam designers predefine checkboxes, each carrying feedback and an associated partial grade. Assessors then tick the checkboxes relevant to a student solution. Dependencies between checkboxes ensure that assessors follow the grading scheme consistently. Moreover, the approach supports ‘blind grading’ by hiding the grades associated with the checkboxes, focusing assessors on the criteria rather than the scores. The approach was studied during a large-scale mathematics state exam. Results show that assessors perceived checkbox grading as very useful. However, compared to traditional grading, where assessors follow a correction scheme and communicate the resulting grade, checkbox grading takes more time, while both approaches are equally reliable. Blind grading improved inter-rater reliability for some tasks. Overall, checkbox grading might lead to a smoother process in which feedback, not solely grades, is communicated to students.
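To make the idea concrete, the sketch below shows one way a checkbox grading scheme could be represented: each checkbox bundles feedback text, a partial grade, and dependencies on other checkboxes, and a blind-grading flag hides the aggregated score from the assessor. This is a minimal illustration under assumed names (`Checkbox`, `score`, `requires`, `points`); the paper does not specify the authors' actual data model or implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Checkbox:
    """One predefined grading criterion with feedback and a partial grade (hypothetical model)."""
    label: str                                        # criterion shown to the assessor
    feedback: str                                     # feedback text communicated to the student
    points: float                                     # partial grade awarded when ticked
    requires: set[str] = field(default_factory=set)   # other checkboxes that must also be ticked

def score(ticked: set[str], scheme: list[Checkbox], blind: bool = False) -> dict:
    """Validate checkbox dependencies and aggregate feedback (and, unless blind, the grade)."""
    by_label = {c.label: c for c in scheme}
    for label in ticked:
        missing = by_label[label].requires - ticked
        if missing:
            raise ValueError(f"'{label}' requires {missing} to be ticked as well")
    feedback = [by_label[label].feedback for label in ticked]
    total = sum(by_label[label].points for label in ticked)
    # In blind grading the assessor sees only the feedback, not the resulting score.
    return {"feedback": feedback} if blind else {"feedback": feedback, "grade": total}

# Hypothetical scheme for a single exam task.
scheme = [
    Checkbox("setup", "Correct equation set up.", 1.0),
    Checkbox("solve", "Equation solved correctly.", 2.0, requires={"setup"}),
]
print(score({"setup", "solve"}, scheme))        # assessor view with grades
print(score({"setup"}, scheme, blind=True))     # blind grading: feedback only
```

In such a design, the dependency check is what enforces consistency across assessors, while the `blind` flag separates criterion-based judgement from score awareness.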
| Original language | English |
| --- | --- |
| Article number | 101443 |
| Number of pages | 14 |
| Journal | Studies in Educational Evaluation |
| Volume | 85 |
| DOIs | |
| Publication status | Published - Jun 2025 |
Bibliographical note
Publisher Copyright: © 2025 The Authors
Keywords
- Assessment
- Computer-assisted assessment
- Feedback
- Inter-rater reliability
- State examinations