TY - GEN
T1 - Feedback, Control, or Explanations? Supporting Teachers With Steerable Distractor-Generating AI
AU - Szymanski, Maxwell
AU - Ooge, Jeroen
AU - De Croon, Robin
AU - Vanden Abeele, Vero
AU - Verbert, Katrien
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/3/18
Y1 - 2024/3/18
AB - Recent advancements in Educational AI have focused on models for automatic question generation. Yet, these advancements face challenges: (1) their "black-box" nature limits transparency, thereby obscuring the decision-making process; and (2) their novelty sometimes causes inaccuracies due to limited feedback systems. Explainable AI (XAI) aims to address the first limitation by clarifying model decisions, while Interactive Machine Learning (IML) emphasises user feedback and model refinement. However, both XAI and IML solutions primarily serve AI experts, often neglecting novices like teachers. Such oversights lead to issues like misaligned expectations and reduced trust. Following a user-centred design method, we collaborated with teachers and ed-tech experts to develop an AI-aided system for generating multiple-choice question distractors, which incorporates feedback, control, and visual explanations. Evaluating these features through semi-structured interviews with 12 teachers, we found a strong preference for the feedback feature, which enables teacher-guided AI improvements. The usefulness of control and explanations largely depended on model performance: they were valued when the model performed well, but when it did not, teachers sought context over AI-centric explanations, suggesting a tilt towards data-centric explanations. Based on these results, we propose guidelines for creating tools that enable teachers to steer and interact with question-generating AI models.
KW - automated question generation
KW - Interactive Machine Learning
KW - user control
KW - user studies
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85187552036&partnerID=8YFLogxK
U2 - 10.1145/3636555.3636933
DO - 10.1145/3636555.3636933
M3 - Conference contribution
AN - SCOPUS:85187552036
SN - 9798400716188
T3 - ACM International Conference Proceeding Series
SP - 690
EP - 700
BT - LAK 2024 Conference Proceedings - 14th International Conference on Learning Analytics and Knowledge
PB - Association for Computing Machinery
T2 - 14th International Conference on Learning Analytics and Knowledge, LAK 2024
Y2 - 18 March 2024 through 22 March 2024
ER -