A Near-Real-Time Processing Ego Speech Filtering Pipeline Designed for Speech Interruption During Human-Robot Interaction

Yue Li*, Florian A. Kunneman, Koen V. Hindriks

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

With current state-of-the-art (SOTA) automatic speech recognition (ASR) systems, it is not possible to transcribe overlapping speech audio streams separately. Consequently, when these ASR systems are used as part of a social robot like Pepper for interaction with a human, it is common practice to close the robot's microphone while the robot itself is talking. This prevents human users from interrupting the robot, which limits speech-based human-robot interaction. To enable a more natural interaction that allows for such interruptions, we propose an audio processing pipeline for filtering out the robot's ego speech using only a single-channel microphone. This pipeline takes advantage of the possibility of feeding the robot's ego speech signal, generated by a text-to-speech API, into a machine learning model as training data. The proposed pipeline combines a convolutional neural network and spectral subtraction to extract overlapping human speech from the audio recorded by the robot-embedded microphone. When evaluated on a held-out test set, this pipeline outperforms our previous approach to this task, as well as SOTA target speech extraction systems retrained on the same dataset. We have also integrated the proposed pipeline into a lightweight robot software development framework to make it available for broader use. As a step towards demonstrating the feasibility of deploying our pipeline, we use this framework to evaluate its effectiveness in a small lab-based feasibility pilot with the social robot Pepper. Our results show that when participants interrupt the robot, the pipeline can extract the participant's speech from one-second streaming audio buffers received by the robot-embedded single-channel microphone, and hence operates in near real time.
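The abstract describes combining a convolutional neural network with spectral subtraction on one-second single-channel buffers. As an illustrative sketch only (not the authors' implementation), the Python snippet below shows plain magnitude spectral subtraction on such a buffer, assuming the TTS-generated ego speech signal is already time-aligned with the microphone recording; in the paper's pipeline, a CNN supplies the ego-speech estimate rather than the raw TTS signal, and the function and parameter names here are hypothetical.

# Hypothetical sketch of a spectral-subtraction step, not the authors' code.
# Assumes the ego-speech estimate is time-aligned with the one-second
# microphone buffer and sampled at the same rate.
import numpy as np
from scipy.signal import stft, istft

def subtract_ego_speech(mic_buffer: np.ndarray,
                        ego_estimate: np.ndarray,
                        fs: int = 16000,
                        nperseg: int = 512,
                        over_subtraction: float = 1.0) -> np.ndarray:
    """Remove an estimated ego-speech component from a single-channel buffer."""
    # STFT of the mixture (robot microphone) and of the estimated ego speech.
    _, _, mix_spec = stft(mic_buffer, fs=fs, nperseg=nperseg)
    _, _, ego_spec = stft(ego_estimate, fs=fs, nperseg=nperseg)

    # Subtract magnitudes, floor at zero to avoid negative energies,
    # and keep the phase of the mixture for reconstruction.
    mix_mag, mix_phase = np.abs(mix_spec), np.angle(mix_spec)
    clean_mag = np.maximum(mix_mag - over_subtraction * np.abs(ego_spec), 0.0)
    clean_spec = clean_mag * np.exp(1j * mix_phase)

    # Back to the time domain; output length may differ by a few samples.
    _, clean = istft(clean_spec, fs=fs, nperseg=nperseg)
    return clean

# Example: a one-second buffer at 16 kHz, mimicking the near-real-time setting.
if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    ego = 0.5 * np.sin(2 * np.pi * 220 * t)    # stand-in for robot ego speech
    human = 0.3 * np.sin(2 * np.pi * 440 * t)  # stand-in for overlapping human speech
    recovered = subtract_ego_speech(ego + human, ego, fs=fs)
    print(recovered.shape)

The sketch keeps the mixture phase, a common simplification in spectral subtraction; the paper's contribution lies in how the ego-speech estimate is obtained and in running the whole chain on streaming one-second buffers.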

Original language: English
Title of host publication: 33rd IEEE International Conference on Robot and Human Interactive Communication, ROMAN 2024
Publisher: IEEE
Pages: 1370-1377
Number of pages: 8
ISBN (Electronic): 9798350375022
ISBN (Print): 9798350375022
DOIs
Publication status: Published - 30 Oct 2024
Event: 33rd IEEE International Conference on Robot and Human Interactive Communication, ROMAN 2024 - Pasadena, United States
Duration: 26 Aug 2024 - 30 Aug 2024

Publication series

Name: IEEE International Workshop on Robot and Human Communication, RO-MAN
ISSN (Print): 1944-9445
ISSN (Electronic): 1944-9437

Conference

Conference: 33rd IEEE International Conference on Robot and Human Interactive Communication, ROMAN 2024
Country/Territory: United States
City: Pasadena
Period: 26/08/24 - 30/08/24

Bibliographical note

Publisher Copyright:
© 2024 IEEE.
