Abstract
This article investigates the potential of using Artificial Intelligence (AI) to assess students’ critical thinking skills in higher education. With the growing adoption of AI technologies in educational assessment, there are prospects for streamlining evaluation processes; however, the integration of AI into critical thinking assessment remains underexplored. To address this gap, we compare an educator’s grading with ChatGPT’s grading of a critical thinking test for university students. We employ a mixed-methods approach: (a) a quantitative comparison of the scores and (b) a thematic analysis of the rationale behind them. The findings suggest that while AI offers broader contextual feedback, human evaluators provide greater precision and closer adherence to grading rubrics, and that universities should consider a hybrid human-and-AI evaluation approach. This study contributes to the discourse on integrating AI into assessment practices in higher education while addressing issues of transparency and interpretability.
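The quantitative comparison of educator and ChatGPT scores can be illustrated with standard inter-rater agreement statistics. The snippet below is a minimal sketch, not the authors’ published analysis code: the rubric scale, the example scores, and the choice of Pearson correlation and quadratically weighted Cohen’s kappa are assumptions made purely for illustration.

```python
# Illustrative sketch only; the study's actual analysis code is not published here.
# Assumes integer rubric scores per answer from both raters (hypothetical data).
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

educator_scores = [3, 4, 2, 4, 1, 3, 2, 4]   # hypothetical educator ratings
chatgpt_scores  = [3, 3, 2, 4, 2, 4, 2, 3]   # hypothetical ChatGPT ratings

# Linear association between the two sets of scores
r, p_value = pearsonr(educator_scores, chatgpt_scores)

# Chance-corrected agreement; quadratic weights penalise larger disagreements more
kappa = cohen_kappa_score(educator_scores, chatgpt_scores, weights="quadratic")

print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
print(f"Weighted Cohen's kappa = {kappa:.2f}")
```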
| Original language | English |
| --- | --- |
| Number of pages | 14 |
| Journal | International Journal of Human-Computer Interaction |
| DOIs | |
| Publication status | E-pub ahead of print, 14 May 2025 |
Bibliographical note
Publisher Copyright: © 2025 The Author(s). Published with license by Taylor & Francis Group, LLC.
Funding
We would like to thank our research team and student assistants for their support during the data collection. We are grateful to all the participants for their valuable contribution to our study.
| Funders | Funder number |
| --- | --- |
| Federal Ministry of Education and Research (BMBF) | |
Keywords
- Artificial intelligence in education (AIEd)
- ChatGPT
- critical thinking
- higher education assessment