Fully-attentive and interpretable: vision and video vision transformers for pain detection

Giacomo Fiorentini, Itir Önal Ertuğrul, Albert Salah

Research output: Contribution to conference › Paper › Academic

Abstract

Pain is a serious and costly issue globally, but before it can be treated, it must first be detected. Vision transformers are among the top-performing architectures in computer vision, yet little research has explored their use for pain detection. In this paper, we propose the first fully-attentive automated pain detection pipeline that achieves state-of-the-art performance on binary pain detection from facial expressions. The model is trained on the UNBC-McMaster dataset, after faces are 3D-registered and rotated to the canonical frontal view. In our experiments, we identify important areas of the hyperparameter space and their interaction with vision and video vision transformers, obtaining three noteworthy models. We analyse the attention maps of one of our models, finding reasonable interpretations for its predictions. We also evaluate Mixup, a data augmentation technique, and Sharpness-Aware Minimization, an optimizer, but neither improves performance. Our presented models, ViT-1 (F1 score 0.55 ± 0.15), ViViT-1 (F1 score 0.55 ± 0.13), and ViViT-2 (F1 score 0.49 ± 0.04), all outperform earlier works, showing the potential of vision transformers for pain detection.
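As context for the Mixup result mentioned in the abstract, below is a minimal PyTorch sketch of the technique (Zhang et al., 2018): pairs of training examples and their labels are convexly combined. The alpha value, batch shapes, and the use of soft binary labels are illustrative assumptions, not details taken from the paper.

    import torch

    def mixup(x, y, alpha=0.2):
        """Mixup: convexly combine random pairs of inputs and labels.

        x: batch of inputs, e.g. face crops, shape (B, C, H, W)
        y: labels as floats, e.g. shape (B,) with 0.0 = no pain, 1.0 = pain
        alpha: Beta-distribution parameter controlling interpolation strength
               (0.2 is an illustrative choice, not the paper's setting)
        """
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(x.size(0))          # random pairing within the batch
        x_mix = lam * x + (1.0 - lam) * x[perm]   # blended inputs
        y_mix = lam * y + (1.0 - lam) * y[perm]   # blended (soft) labels
        return x_mix, y_mix

Training then proceeds on the blended batch with a loss that accepts soft targets, such as binary cross-entropy.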
Original language: English
Number of pages: 12
Publication status: Published - Dec 2022
Event: NeurIPS 2022: Thirty-Sixth Conference on Neural Information Processing Systems
Duration: 28 Nov 2022 - 9 Dec 2022

Conference

Conference: NeurIPS 2022
Period: 28/11/22 - 9/12/22
