Abstract
Deep convolutional networks are widely used in video action recognition. 3D convolutions are one prominent approach for modeling the additional time dimension. While 3D convolutions typically lead to higher accuracies, the inner workings of the trained models are more difficult to interpret. We focus on creating human-understandable visual explanations that represent the hierarchical parts of spatio-temporal networks. We introduce Class Feature Pyramids, a method that traverses the entire network structure and incrementally discovers kernels at different network depths that are informative for a specific class. Our method does not depend on the network's architecture or the type of 3D convolutions, supporting grouped and depth-wise convolutions, convolutions in fibers, and convolutions in branches. We demonstrate the method on six state-of-the-art 3D convolutional neural networks (CNNs) on three action recognition datasets (Kinetics-400, UCF-101, and HMDB-51) and two egocentric action recognition datasets (EPIC-Kitchens and EGTEA Gaze+).
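The backward, class-specific traversal the abstract describes can be sketched in miniature. The function below, its name, and its scoring rule (keeping kernels whose strongest connection to an already-selected kernel in the layer above exceeds a threshold) are illustrative assumptions for a toy fully-connected stand-in, not the paper's exact criterion or implementation.

```python
def discover_informative_kernels(layers, class_weights, threshold=0.5):
    """Toy sketch of a class-specific backward traversal.

    layers: list (input -> output order) of connection matrices, where
        layers[i][j][k] is the connection strength between kernel k of
        layer i and kernel j of layer i + 1 (a stand-in for conv kernels).
    class_weights: weights from the last layer's kernels to one class.
    Returns, per layer, the indices of kernels kept as informative.
    """
    # Start from the kernels that feed the class score strongly.
    selected = [k for k, w in enumerate(class_weights) if abs(w) >= threshold]
    pyramid = [selected]
    # Walk towards the input, keeping kernels strongly connected
    # to the kernels already selected in the layer above.
    for layer in reversed(layers):
        scores = {}
        for j in pyramid[0]:  # selected kernels in the layer above
            for k, w in enumerate(layer[j]):
                scores[k] = max(scores.get(k, 0.0), abs(w))
        pyramid.insert(0, sorted(k for k, s in scores.items() if s >= threshold))
    return pyramid
```

For example, with one connection matrix `[[0.9, 0.1], [0.2, 0.8]]` and class weights `[0.7, 0.1]`, only kernel 0 of the last layer passes the threshold, and the traversal then keeps kernel 0 of the layer below, which connects to it with strength 0.9.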
Original language | English |
---|---|
Title of host publication | Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) |
Publisher | IEEE |
Pages | 4255-4264 |
ISBN (Electronic) | 978-1-7281-5023-9 |
ISBN (Print) | 978-1-7281-5024-6 |
DOIs | |
Publication status | Published - 2019 |
Event | IEEE International Conference on Computer Vision Workshops 2019, Seoul, Korea, Republic of, 27 Oct 2019 → 2 Nov 2019 |
Workshop
Workshop | IEEE International Conference on Computer Vision Workshops 2019 |
---|---|
Country/Territory | Korea, Republic of |
City | Seoul |
Period | 27/10/19 → 2/11/19 |
Keywords
- Visual Explanations
- Explainable Convolutions
- Spatio-temporal feature representation
- Feature extraction
- Kernel
- Visualization
- Convolutional codes
- Three-dimensional displays
- Biological neural networks
- Complexity theory