TY - GEN
T1 - Explaining Recommendations in E-Learning
T2 - 27th International Conference on Intelligent User Interfaces, IUI 2022
AU - Ooge, Jeroen
AU - Kato, Shotallo
AU - Verbert, Katrien
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/3/22
Y1 - 2022/3/22
AB - In the scope of explainable artificial intelligence, explanation techniques are heavily studied to increase trust in recommender systems. However, studies on explaining recommendations typically target adults in e-commerce or media contexts; e-learning has received less research attention. To address these gaps, we investigated how explanations affect adolescents' initial trust in an e-learning platform that recommends mathematics exercises with collaborative filtering. In a randomized controlled experiment with 37 adolescents, we compared real explanations with placebo and no explanations. Our results show that real explanations significantly increased initial trust when trust was measured as a multidimensional construct of competence, benevolence, integrity, intention to return, and perceived transparency. Yet, this result did not hold when trust was measured one-dimensionally. Furthermore, not all adolescents attached equal importance to explanations, and trust scores were high overall. These findings underline the need to tailor explanations and suggest that dynamically learned factors may be more important than explanations for building initial trust. To conclude, we thus reflect upon the need for explanations and recommendations in e-learning in low-stakes and high-stakes situations.
KW - education
KW - explainability
KW - interpretability
KW - teenagers
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85127773688&partnerID=8YFLogxK
U2 - 10.1145/3490099.3511140
DO - 10.1145/3490099.3511140
M3 - Conference contribution
AN - SCOPUS:85127773688
T3 - International Conference on Intelligent User Interfaces, Proceedings IUI
SP - 93
EP - 105
BT - 27th International Conference on Intelligent User Interfaces, IUI 2022
PB - Association for Computing Machinery
Y2 - 22 March 2022 through 25 March 2022
ER -