Abstract
Observing a child’s interaction with their parents can provide important information about the child’s cognitive development. Nonverbal cues such as joint attention and mutual gaze can indicate a child’s engagement and have diagnostic value. Since manual coding of gaze events during child-parent interactions is time-consuming and error-prone, there is a need for automatic assessment tools capable of working with camera recordings, without specialized eye-tracking equipment. Few studies address this setting, and naturalistic parent-child videos are difficult to obtain. In this paper, we investigate the feasibility of detecting joint attention and mutual gaze in videos. We test our approach on challenging data of a child and a parent engaged in free play. By combining multiple off-the-shelf approaches, we create a system that requires little labeling and is flexible to use for view-independent interaction analysis.
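The abstract describes the pipeline only at a high level, so the following is a purely illustrative sketch: once off-the-shelf detectors supply per-frame 3-D head positions and gaze direction vectors for both people, mutual gaze and joint attention can be reduced to simple angular tests. Everything below (function names, the 15° threshold, the toy coordinates) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def angle_deg(v1, v2):
    """Angle in degrees between two 3-D vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def mutual_gaze(head_a, gaze_a, head_b, gaze_b, thresh=15.0):
    """Mutual gaze: each person's gaze ray points at the other's head."""
    return (angle_deg(gaze_a, head_b - head_a) < thresh and
            angle_deg(gaze_b, head_a - head_b) < thresh)

def joint_attention(head_a, gaze_a, head_b, gaze_b, target, thresh=15.0):
    """Joint attention: both gaze rays converge on the same target point."""
    return (angle_deg(gaze_a, target - head_a) < thresh and
            angle_deg(gaze_b, target - head_b) < thresh)

# Toy frame: child and parent face each other; a toy lies between them.
child, parent = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
toy = np.array([0.5, -0.3, 0.2])

print(mutual_gaze(child, parent - child, parent, child - parent))      # True
print(joint_attention(child, toy - child, parent, toy - parent, toy))  # True
```

In a real view-independent setup, the head positions and gaze vectors would come from whichever off-the-shelf 3-D gaze estimator is used, and per-frame decisions would typically be smoothed over time before being coded as gaze events.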
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | ICMI 2023 Companion - Companion Publication of the 25th International Conference on Multimodal Interaction |
| Subtitle of host publication | Companion Publication of the 25th International Conference on Multimodal Interaction |
| Editors | Elisabeth André, Mohamed Chetouani |
| Publisher | Association for Computing Machinery |
| Pages | 374–382 |
| Number of pages | 9 |
| ISBN (Electronic) | 9798400703218 |
| ISBN (Print) | 979-8-4007-0321-8 |
| DOIs | |
| Publication status | Published - 9 Oct 2023 |
Publication series
| Field | Value |
| --- | --- |
| Name | ACM International Conference Proceeding Series |
Bibliographical note
Publisher Copyright: © 2023 ACM.
Funding
The work is partly funded by the China Scholarship Council (CSC).
| Funders | Funder number |
| --- | --- |
| China Scholarship Council | |
Keywords
- cognitive development
- joint attention
- mutual gaze
- parent-child interaction