Abstract
During game play, undesired emotions can arise that impede players’ positive experiences. Adapting game features based on players’ emotions can help to address this problem, but requires a way to detect the current emotional state. We investigate using input parameters on a graphics tablet in combination with in-game performance to unobtrusively detect players’ current emotional state. We conducted a user study with 48 participants to collect self-reported emotions, input data from the tablet, and in-game performance in a serious game teaching players to write Japanese hiragana characters. We synchronized the data, extracted 46 features, trained machine learning models, and evaluated their performance in predicting levels of valence, arousal, and dominance, modeled as a seven-class problem. The analysis shows that random forests achieve good accuracies, with F1 scores of .567 to .577 and AUC of .738 to .740, while using input features or in-game performance alone leads to markedly decreased performance. Finally, we propose an architecture that developers can use to define undesired emotion levels, combining emotion recognition with adaptive content generation.
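The evaluation pipeline described above (46 extracted features, a seven-class target per affect dimension, random forests scored by macro F1 and AUC) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, feature count, and model settings are placeholder assumptions standing in for the study's tablet-input and in-game performance features.

```python
# Illustrative sketch (assumed setup, not the paper's implementation):
# train a random forest on 46 synthetic features to predict one of
# seven emotion levels, then report the metrics used in the paper,
# macro-averaged F1 and one-vs-rest AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_features, n_classes = 1000, 46, 7  # placeholder sizes

# Synthetic stand-in for tablet-input and in-game performance features.
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)  # e.g. valence level 0-6

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

f1 = f1_score(y_test, clf.predict(X_test), average="macro")
auc = roc_auc_score(y_test, clf.predict_proba(X_test),
                    multi_class="ovr", average="macro")
print(f"macro F1: {f1:.3f}  macro AUC: {auc:.3f}")
```

On real, informative features the paper reports F1 around .57 and AUC around .74; on random data like this, scores hover near chance, which is why the synthetic numbers here carry no meaning beyond demonstrating the pipeline.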
Original language | English |
---|---|
Title of host publication | CHI PLAY 2018 - Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play |
Publisher | Association for Computing Machinery |
Pages | 187-199 |
Number of pages | 13 |
ISBN (Electronic) | 9781450356244 |
DOIs | |
Publication status | Published - 23 Oct 2018 |
Event | 5th ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play, CHI PLAY 2018 - Melbourne, Australia; 28 Oct 2018 → 31 Oct 2018 |
Publication series
Name | CHI PLAY 2018 - Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play |
---|---|
Conference
Conference | 5th ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play, CHI PLAY 2018 |
---|---|
Country/Territory | Australia |
City | Melbourne |
Period | 28/10/18 → 31/10/18 |
Bibliographical note
Funding Information: We thank the study participants. We thank the Carl-Zeiss Foundation for the partial funding of this work. We thank Johannes Stäbler for aiding in the implementation and Nina Bien and Anna Kolobanova for their help in the data collection. Finally, we thank Jan Gugenheimer, Janek Thomas, and Marcel Walch for their helpful comments on this work.
Publisher Copyright:
©2018 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
Keywords
- Adaptive games
- Classification
- Emotion recognition
- Machine learning
- Performance