Multi-modal Score Fusion and Decision Trees for Explainable Automatic Job Candidate Screening from Video CVs

Heysem Kaya, Furkan Gurpinar, Albert Ali Salah

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

We describe an end-to-end system for explainable automatic job candidate screening from video CVs. In this application, audio, face, and scene features are first computed from an input video CV using rich feature sets. These multiple modalities are fed into modality-specific regressors to predict apparent personality traits, together with a variable indicating whether the subject will be invited to an interview. The base learners are stacked into an ensemble of decision trees to produce the outputs of the quantitative stage, and a single decision tree, combined with a rule-based algorithm, produces interview-decision explanations based on the quantitative results. The proposed system ranks first in both the quantitative and qualitative stages of the CVPR 2017 ChaLearn Job Candidate Screening Coopetition.
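The two-stage fusion described above — modality-specific regressors whose score outputs are stacked into a decision-tree ensemble — can be sketched as follows. This is a minimal illustration with synthetic data and placeholder base learners (Ridge regression, a random forest), not the authors' actual features, models, or code.

```python
# Hedged sketch of stacked multi-modal score fusion: one regressor per
# modality (audio, face, scene), whose predictions are fused by a
# decision-tree ensemble. All data below is synthetic for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, n_train = 200, 150

# Synthetic stand-ins for the audio, face, and scene feature blocks.
feats = {m: rng.normal(size=(n, 10)) for m in ("audio", "face", "scene")}

# Synthetic "invite to interview" target in (0, 1).
y = 1.0 / (1.0 + np.exp(-(feats["audio"][:, 0]
                          + feats["face"][:, 0]
                          - feats["scene"][:, 0])))

# Stage 1: modality-specific base regressors produce per-modality scores.
stacked = np.column_stack([
    Ridge(alpha=1.0).fit(feats[m][:n_train], y[:n_train]).predict(feats[m])
    for m in feats
])

# Stage 2: an ensemble of decision trees fuses the stacked scores.
fuser = RandomForestRegressor(n_estimators=50, random_state=0)
fuser.fit(stacked[:n_train], y[:n_train])
pred = fuser.predict(stacked[n_train:])
print(pred.shape)
```

A single, shallow decision tree fit on the same stacked scores could then be inspected (e.g. via `sklearn.tree.export_text`) to derive rule-based explanations, mirroring the qualitative stage of the paper.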

Original language: English
Title of host publication: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017
Publisher: IEEE Computer Society Press
Pages: 1651-1659
Number of pages: 9
Volume: 2017-July
ISBN (Electronic): 9781538607336
DOIs
Publication status: Published - 22 Aug 2017
Event: 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017 - Honolulu, United States
Duration: 21 Jul 2017 - 26 Jul 2017

Conference

Conference: 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017
Country/Territory: United States
City: Honolulu
Period: 21/07/17 - 26/07/17
