Abstract
Improving trust in the state of Cyber-Physical Systems becomes increasingly important as more Cyber-Physical System tasks become autonomous. Research into the sound of Cyber-Physical Systems has shown that audio side-channel information from a single microphone can be used to accurately model traditional primary state sensor measurements, such as speed and gear position. Furthermore, data integration research has shown that information from multiple heterogeneous sources can be integrated to create improved and more reliable data. In this paper, we present a multi-microphone machine learning data fusion approach to accurately predict the ascending/hovering/descending states of a multi-rotor UAV in flight. We show that data fusion of multiple audio classifiers predicts these states with accuracies over 94%. Furthermore, we significantly improve on the state predictions of single microphones and outperform several other integration methods. These results add to a growing body of work showing that microphone side-channel approaches can be used in Cyber-Physical Systems to accurately model and improve the assurance of primary sensor measurements.
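The abstract does not specify how the per-microphone classifiers are fused, so the following is only a minimal sketch of one plausible scheme: soft voting, where each microphone channel gets its own classifier and their class-probability outputs are averaged. All names (`mic_features`, `train_per_mic_classifiers`, `fuse_predictions`), the choice of random forests, and the fusion rule itself are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: soft-voting fusion of per-microphone audio classifiers.
# The feature extraction, classifier choice, and fusion rule are assumptions,
# not the method described in the paper's abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STATES = ["ascending", "hovering", "descending"]

def train_per_mic_classifiers(mic_features, labels):
    """Train one classifier per microphone channel.

    mic_features: list of (n_samples, n_features) arrays, one per microphone.
    labels: (n_samples,) array of state indices (0=ascending, 1=hovering, 2=descending).
    """
    classifiers = []
    for X in mic_features:
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, labels)
        classifiers.append(clf)
    return classifiers

def fuse_predictions(classifiers, mic_features):
    """Average the class-probability outputs of all per-microphone
    classifiers and take the most likely state (soft voting)."""
    probs = np.mean(
        [clf.predict_proba(X) for clf, X in zip(classifiers, mic_features)],
        axis=0,
    )
    return probs.argmax(axis=1)
```

Hard (majority) voting or a learned meta-classifier over the per-microphone outputs would be alternative integration strategies of the kind the abstract compares against, but which ones the paper actually evaluates is not stated here.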
| Original language | English |
| --- | --- |
| Title of host publication | 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI 2017, Daegu, Korea (South), November 16-18, 2017 |
| Pages | 15-21 |
| Number of pages | 7 |
| DOIs | |
| Publication status | Published - 16 Nov 2017 |