In most classification tasks, wide and deep neural networks perform and generalize better than their smaller counterparts, particularly when trained on large and heterogeneous datasets. However, in the emerging field of the Internet of Things (IoT), memory footprint and energy budget impose severe limits on the size and complexity of the neural models that can be deployed on embedded devices. The Student-Teacher approach is an attractive strategy for distilling knowledge from a large network into smaller ones that can fit on low-energy, low-complexity embedded IoT platforms. In this paper, we consider the outdoor sound event detection task as a use case. Building upon the VGGish network, we investigate different distillation strategies to substantially reduce the classifier's size and computational cost with minimal performance loss. Experiments on the UrbanSound8K dataset show that extreme compression factors can be achieved while limiting the accuracy degradation to five percentage points (from 75% to 70%). Finally, we compare different embedded platforms to analyze the trade-off between available resources and achievable accuracy.
|Title:||Neural Network Distillation on IoT Platforms for Sound Event Detection|
|Publication date:||2019|
|Appears in type:||4.1 Contribution in Conference Proceedings|
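The Student-Teacher distillation mentioned in the abstract is commonly implemented as a loss that blends standard cross-entropy on hard labels with a KL-divergence term between temperature-softened teacher and student outputs (Hinton-style distillation). The sketch below is illustrative only and not taken from the paper; the function names, the temperature `T`, and the blending weight `alpha` are assumptions for demonstration.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: a higher T yields a softer distribution,
    # exposing the teacher's "dark knowledge" about class similarities.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hypothetical Hinton-style distillation loss:
    #   alpha * CE(hard labels, student) + (1 - alpha) * T^2 * KL(teacher || student)
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    hard = softmax(student_logits)  # T = 1 for the hard-label term
    labels = np.asarray(labels)
    ce = -np.log(hard[np.arange(len(labels)), labels] + 1e-12)
    # The T^2 factor keeps the soft-target gradients on the same scale
    # as the hard-label gradients (as in the original distillation paper).
    return float(np.mean(alpha * ce + (1 - alpha) * (T ** 2) * kl))
```

In a typical setup, the large teacher (here, a VGGish-like model) is run in inference mode to produce `teacher_logits`, and only the small student is updated by minimizing this loss.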