Abstract
For the objective measurement of animal behavior from video, automated recognition systems are frequently employed. These systems rely on action models learned from labeled example videos. Manually labeling videos of animal behavior, however, is time-consuming and error-prone. We propose to reduce the labeling effort by selecting suitable training instances from the unlabeled corpus and learning the action models iteratively in interaction with the user. Due to the typical imbalance of behavior datasets, a greedy selection strategy would fail to select enough minority-class samples. To address the imbalance, we first find potential action prototypes by clustering the unlabeled data using a Dirichlet Process Gaussian Mixture Model. We then sample instances from the prototypes and obtain a more balanced training set. We evaluate our system on two rat interaction datasets with different class priors and demonstrate a learning rate superior to the baseline.
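The clustering-then-sampling idea from the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `BayesianGaussianMixture` with a Dirichlet-process prior as the DP-GMM, synthetic imbalanced 2-D features in place of real video descriptors, and a simple round-robin draw of a few instances per discovered cluster (prototype) as the selection strategy; all names and parameter values are assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic, imbalanced features standing in for per-clip video descriptors:
# a large majority-behavior cluster and a small minority-behavior cluster.
X = np.vstack([
    rng.normal(0.0, 1.0, size=(950, 2)),  # majority behavior
    rng.normal(8.0, 0.5, size=(50, 2)),   # minority behavior
])

# Truncated Dirichlet Process GMM: n_components is only an upper bound;
# the DP prior shrinks the weights of unused components.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = dpgmm.predict(X)

# Sampling a fixed budget per cluster (action prototype) gives a query
# set that is more class-balanced than greedily picking from the corpus,
# since the rare behavior occupies its own cluster.
per_cluster = 5
query_idx = []
for k in np.unique(labels):
    members = np.flatnonzero(labels == k)
    take = rng.choice(members, size=min(per_cluster, members.size),
                      replace=False)
    query_idx.extend(take.tolist())

print(len(query_idx))  # instances to present to the annotator
```

In an interactive loop, the instances indexed by `query_idx` would be shown to the user for labeling, the action models retrained, and the selection repeated on the remaining unlabeled pool.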
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the International Workshop on Visual observation and analysis of Vertebrate And Insect Behavior (VAIB) |
| Publication status | Published - 2016 |