Automated Detection of Joint Attention and Mutual Gaze in Free Play Parent-Child Interactions

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Observing a child’s interaction with their parents can provide important information about the child’s cognitive development. Nonverbal cues such as joint attention and mutual gaze can indicate a child’s engagement and have diagnostic value. Since manual coding of gaze events during child-parent interactions is time-consuming and error-prone, there is a need for automatic assessment tools capable of working with camera recordings, without specialized eye-tracking equipment. There are few studies in this setting, and accessing naturalistic parent-child videos is difficult. In this paper, we investigate the feasibility of detecting joint attention and mutual gaze in videos. We test our approach on challenging data of a child and a parent engaged in free play. By combining multiple off-the-shelf approaches, we create a system that requires little labeling and is flexible enough for view-independent interaction analysis.
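The abstract does not spell out how mutual gaze is decided from estimated gaze. A common geometric formulation, and a plausible building block for a system like the one described, is to check whether each person's gaze direction points toward the other's head within an angular threshold. The sketch below is illustrative only: the function names, the threshold value, and the assumption that per-frame 3D gaze vectors and head positions come from off-the-shelf estimators are all hypothetical, not taken from the paper.

```python
import numpy as np

def looks_at(gaze_dir, own_head, other_head, max_angle_deg=15.0):
    """True if gaze_dir (3D vector) points at other_head within max_angle_deg.

    gaze_dir, own_head, other_head: 3-element sequences (e.g. from an
    off-the-shelf gaze estimator and head/pose detector). The 15-degree
    threshold is an illustrative choice, not a value from the paper.
    """
    to_other = np.asarray(other_head, float) - np.asarray(own_head, float)
    g = np.asarray(gaze_dir, float)
    cos_angle = np.dot(g, to_other) / (np.linalg.norm(g) * np.linalg.norm(to_other))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg

def mutual_gaze(child_gaze, child_head, parent_gaze, parent_head,
                max_angle_deg=15.0):
    """Mutual gaze holds in a frame when each partner looks at the other."""
    return (looks_at(child_gaze, child_head, parent_head, max_angle_deg)
            and looks_at(parent_gaze, parent_head, child_head, max_angle_deg))
```

For example, a child at the origin gazing along +x toward a parent at (1, 0, 0) who gazes back along -x would count as mutual gaze, while a child gazing upward would not. Joint attention could be handled analogously by testing whether both gaze rays converge on the same object region rather than on each other.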
Original language: English
Title of host publication: ICMI 2023 Companion
Subtitle of host publication: Companion Publication of the 25th International Conference on Multimodal Interaction
Editors: Elisabeth André, Mohamed Chetouani
Publisher: Association for Computing Machinery (ACM)
Pages: 374–382
Number of pages: 9
ISBN (Electronic): 9798400703218
ISBN (Print): 979-8-4007-0321-8
DOIs
Publication status: Published - 9 Oct 2023

Publication series

Name: ACM International Conference Proceeding Series

Keywords

  • cognitive development
  • joint attention
  • mutual gaze
  • parent-child interaction

