Kattis vs ChatGPT: Assessment and Evaluation of Programming Tasks in the Age of Artificial Intelligence

Nora Dunder, Saga Lundborg, Jacqueline Wong, Olga Viberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

AI-powered education technologies can support students and teachers in computer science education. However, despite recent developments in generative AI, and in particular the rapidly growing popularity of ChatGPT, the effectiveness of large language models for solving programming tasks remains underexplored. The present study examines ChatGPT’s ability to generate code solutions at different difficulty levels for introductory programming courses. We conducted an experiment in which ChatGPT was tested on 127 randomly selected programming problems provided by Kattis, an automatic software grading tool for computer science programs that is often used in higher education. The results showed that ChatGPT could independently solve 19 of the 127 programming tasks provided and assessed by Kattis. Further, ChatGPT generated accurate code solutions for simple problems but encountered difficulties with more complex programming tasks. The results contribute to the ongoing debate on the utility of AI-powered tools in programming education.
Original language: English
Title of host publication: LAK '24: Proceedings of the 14th Learning Analytics and Knowledge Conference
Publisher: Association for Computing Machinery
Pages: 821-827
Number of pages: 7
Publication status: Published - 18 Mar 2024

Keywords

  • Academic Integrity
  • Automated Grading
  • ChatGPT
  • Programming Education
