Learning Sound Event Classifiers from Web Audio with Noisy Labels

Title: Learning Sound Event Classifiers from Web Audio with Noisy Labels
Publication Type: Conference Paper
Year of Publication: 2019
Conference Name: International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Authors: Fonseca E., Plakal M., Ellis D. P. W., Font F., Favory X., & Serra X.
Conference Start Date: 12/05/2019
Publisher: IEEE
Conference Location: Brighton, UK
Abstract: As sound event classification moves towards larger datasets, issues of label noise become inevitable. Web sites can supply large volumes of user-contributed audio and metadata, but inferring labels from this metadata introduces errors due to unreliable inputs and limitations in the mapping. There is, however, little research into the impact of these errors. To foster the investigation of label noise in sound event classification, we present FSDnoisy18k, a dataset containing 42.5 hours of audio across 20 sound classes, including a small amount of manually-labeled data and a larger quantity of real-world noisy data. We characterize the label noise empirically and provide a CNN baseline system. Experiments suggest that training with large amounts of noisy data can outperform training with smaller amounts of carefully-labeled data. We also show that noise-robust loss functions can be effective in improving performance in the presence of corrupted labels.
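One example of the noise-robust loss functions the abstract refers to is the generalized cross-entropy (Lq) loss of Zhang & Sabuncu, which interpolates between cross-entropy (q → 0) and mean absolute error (q = 1), down-weighting confidently-wrong, possibly mislabeled examples. The sketch below is illustrative only; the function name and the choice q = 0.7 are assumptions, not necessarily the paper's exact configuration:

```python
import numpy as np

def lq_loss(probs, labels, q=0.7):
    """Generalized cross-entropy (Lq) loss: mean of (1 - p_y^q) / q.

    probs  -- (N, C) array of predicted class probabilities
    labels -- (N,) array of integer class labels (possibly noisy)
    q      -- in (0, 1]; q -> 0 recovers cross-entropy, q = 1 gives MAE
    """
    p_y = probs[np.arange(len(labels)), labels]  # probability of the labeled class
    return np.mean((1.0 - p_y ** q) / q)

# Two confident predictions; the second example's label disagrees
# with the model, as a mislabeled clip might.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05]])
clean_labels = np.array([0, 1])
noisy_labels = np.array([0, 2])

loss_clean = lq_loss(probs, clean_labels)
loss_noisy = lq_loss(probs, noisy_labels)
```

Compared with cross-entropy, whose per-sample value grows without bound as p_y → 0, the Lq loss saturates at 1/q, so a single corrupted label contributes a bounded penalty and gradient.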
Preprint/postprint document: https://arxiv.org/abs/1901.01189