Abstract
Matching objects across partially overlapping camera views is crucial in multi-camera systems and requires a view-invariant feature extraction network. Training such a network with cycle-consistency circumvents the need for labor-intensive labeling. In this paper, we extend the mathematical formulation of cycle-consistency to handle partial overlap. We then introduce a pseudo-mask that directs the training loss to take partial overlap into account. We additionally present several new cycle variants that complement each other, and propose a time-divergent scene sampling scheme that improves the data input for this self-supervised setting. Cross-camera matching experiments on the challenging DIVOTrack dataset show the merits of our approach. Compared to the self-supervised state-of-the-art, our combined contributions achieve a 4.3 percentage point higher F1 score. Our improvements are robust to reduced overlap in the training data, with substantial gains in challenging scenes where only a few matches must be made among many people. Self-supervised feature networks trained with our method are effective at matching objects in a range of multi-camera settings, opening opportunities for complex tasks such as large-scale multi-camera scene understanding.
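The paper's exact loss, including the partial-overlap extension and the pseudo-mask, is not reproduced in this record. As background, the following is a minimal NumPy sketch of the plain cycle-consistency idea the abstract builds on: soft-match objects from camera A to camera B and back, and penalise round trips that do not return to the starting object. All function and variable names here are our own illustration, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cycle_consistency_loss(feat_a, feat_b, temperature=0.1):
    """Soft cycle A -> B -> A: each object in view A should return to itself.

    feat_a: (N, d) L2-normalised features of objects seen in camera A.
    feat_b: (M, d) L2-normalised features of objects seen in camera B.
    """
    sim = feat_a @ feat_b.T                      # (N, M) cosine similarities
    p_ab = softmax(sim / temperature, axis=1)    # soft assignment A -> B
    p_ba = softmax(sim.T / temperature, axis=1)  # soft assignment B -> A
    p_cycle = p_ab @ p_ba                        # (N, N) round-trip assignment
    # Cross-entropy against the identity: row i should land back on object i.
    # No cross-camera labels are needed, which is what makes this self-supervised.
    return float(-np.mean(np.log(np.diag(p_cycle) + 1e-12)))
```

Note that the target is the identity regardless of how objects are ordered in view B, since a consistent cycle returns home through whichever counterpart it matched. This baseline assumes every object in A has a counterpart in B; handling objects without a counterpart is precisely what the paper's pseudo-mask addresses.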
Original language | English
---|---
Pages (from-to) | 19-29
Number of pages | 11
Journal | Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Volume | 3
DOIs |
Publication status | Published - 2025
Event | 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2025, Porto, Portugal. Duration: 26 Feb 2025 → 28 Feb 2025
Bibliographical note
Publisher Copyright: © 2025 by SCITEPRESS - Science and Technology Publications, Lda.
Keywords
- Cross-View Multi-Object Tracking
- Cycle-Consistency
- Feature Learning
- Multi-Camera
- Self-Supervision