Musical Instrument Recognition in User-generated Videos using a Multimodal Convolutional Neural Network Architecture

Title: Musical Instrument Recognition in User-generated Videos using a Multimodal Convolutional Neural Network Architecture
Publication Type: Conference Paper
Year of Publication: 2017
Conference Name: ACM International Conference on Multimedia Retrieval
Authors: Slizovskaia, O., Gómez, E., & Haro, G.
Conference Start Date: 06/06/2017
Publisher: ACM Digital Library
Conference Location: Bucharest, Romania
ISBN Number: 978-1-4503-4701-3/17/06
Keywords: convolutional neural networks, feature fusion, multimedia information retrieval, multimodal musical instrument classification, multimodal video analysis
Abstract: This paper presents a method for recognizing musical instruments in user-generated videos. Musical instrument recognition from music signals is a well-known task in the music information retrieval (MIR) field, where current approaches rely on the analysis of good-quality audio material. This work addresses a real-world scenario with several research challenges, i.e. the analysis of user-generated videos that vary in recording conditions and quality and may contain multiple instruments sounding simultaneously as well as background noise. Our approach does not focus solely on the analysis of audio information; instead, we exploit the multimodal information embedded in the audio and visual domains. To do so, we develop a Convolutional Neural Network (CNN) architecture which combines learned representations from both modalities at a late fusion stage. Our approach is trained and evaluated on two large-scale video datasets: YouTube-8M and FCVID. The proposed architectures demonstrate state-of-the-art results in audio and video object recognition, provide additional robustness to missing modalities, and remain computationally cheap to train.
Final publication: http://dx.doi.org/10.1145/3078971.3079002
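
The late-fusion idea described in the abstract can be illustrated compactly: each modality is processed by its own CNN branch, and the learned embeddings are combined only at the classification stage. Below is a minimal, hypothetical PyTorch sketch of such an architecture; the layer sizes, input shapes, class count, and the names BranchCNN and LateFusionNet are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical late-fusion sketch: two single-modality CNN branches
# (audio spectrogram + video frame) fused by concatenation before the
# classifier. All sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn


class BranchCNN(nn.Module):
    """Small CNN mapping one modality's input to a fixed-size embedding."""

    def __init__(self, in_channels: int, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))


class LateFusionNet(nn.Module):
    """Concatenates per-modality embeddings and classifies (late fusion)."""

    def __init__(self, num_classes: int = 12, embed_dim: int = 128):
        super().__init__()
        self.audio_branch = BranchCNN(in_channels=1, embed_dim=embed_dim)  # e.g. log-mel spectrogram
        self.video_branch = BranchCNN(in_channels=3, embed_dim=embed_dim)  # e.g. RGB key frame
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, audio: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.audio_branch(audio), self.video_branch(frame)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = LateFusionNet()
    spec = torch.randn(4, 1, 96, 128)    # batch of dummy spectrograms
    frame = torch.randn(4, 3, 224, 224)  # batch of dummy video frames
    print(model(spec, frame).shape)      # torch.Size([4, 12])
```

Because fusion happens only after each branch has produced its own embedding, a missing modality can be handled by zeroing or imputing one branch's embedding, which is consistent with the robustness to missing modalities that the abstract reports.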