Abstract
Large Language Models possess skills such as answering questions, writing essays, and solving programming exercises. Since these models are easily accessible, researchers have investigated their capabilities and risks for programming education. This work explores how LLMs can contribute to programming education by supporting students with automated next-step hints. We investigate prompt practices that lead to effective next-step hints and use these insights to build our StAP-tutor. We evaluate this tutor by conducting an experiment with students and performing expert assessments. Our findings show that most LLM-generated feedback messages describe one specific next step and are personalised to the student’s code and approach. However, the hints may contain misleading information and lack sufficient detail when students approach the end of the assignment. This work demonstrates the potential of LLM-generated feedback, but further research is required to explore its practical implementation.
| Original language | English |
| --- | --- |
| Title of host publication | ACE 2024 - Proceedings of the 26th Australasian Computing Education Conference, Held in conjunction with Australasian Computer Science Week |
| Editors | Nicole Herbert, Carolyn Seton |
| Publisher | Association for Computing Machinery |
| Pages | 144-153 |
| Number of pages | 10 |
| ISBN (Electronic) | 9798400716195 |
| ISBN (Print) | 979-8-4007-1619-5 |
| DOIs | |
| Publication status | Published - Jan 2024 |
Publication series
| Name | ACM International Conference Proceeding Series |
| --- | --- |
Bibliographical note
Publisher Copyright: © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Keywords
- Generative AI
- Large Language Models
- Next-step hints
- automated feedback
- learning programming