Next-Step Hint Generation for Introductory Programming Using Large Language Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Large Language Models (LLMs) possess skills such as answering questions, writing essays, and solving programming exercises. Since these models are easily accessible, researchers have investigated their capabilities and risks for programming education. This work explores how LLMs can contribute to programming education by supporting students with automated next-step hints. We investigate prompt practices that lead to effective next-step hints and use these insights to build our StAP-tutor. We evaluate this tutor by conducting an experiment with students and by performing expert assessments. Our findings show that most LLM-generated feedback messages describe one specific next step and are personalised to the student’s code and approach. However, the hints may contain misleading information and lack sufficient detail when students approach the end of the assignment. This work demonstrates the potential of LLM-generated feedback, but further research is required to explore its practical implementation.
Original language: English
Title of host publication: ACE 2024 - Proceedings of the 26th Australasian Computing Education Conference, Held in conjunction with Australasian Computer Science Week
Editors: Nicole Herbert, Carolyn Seton
Publisher: Association for Computing Machinery
Pages: 144-153
Number of pages: 10
ISBN (Electronic): 979-8-4007-1619-5
ISBN (Print): 979-8-4007-1619-5
Publication status: Published - Jan 2024

Publication series

Name: ACM International Conference Proceeding Series

Keywords

  • Generative AI
  • Large Language Models
  • Next-step hints
  • Automated feedback
  • Learning programming
