Audio Data Augmentation with respect to Musical Instrument Recognition

Title: Audio Data Augmentation with respect to Musical Instrument Recognition
Publication Type: Master Thesis
Year of Publication: 2017
Authors: Bhardwaj, S.
Abstract: Identifying musical instruments in a polyphonic music recording is a difficult yet crucial problem in music information retrieval. It helps in auto-tagging a musical piece by instrument, consequently enabling searches of music databases by instrument. Other useful applications of instrument recognition are source separation, genre recognition, music transcription, and instrument-specific equalization. We review the state-of-the-art methods for the task, including recent approaches based on convolutional neural networks (CNNs). These deep learning models require large quantities of annotated data, a problem that can be partly solved by synthetic data augmentation. We study different types of audio data transformations that can help in various audio-related tasks, publishing an augmentation library in the process. We investigate the effect of using augmented data during the training of three state-of-the-art CNN-based models. We achieved a performance improvement of 2% over the best-performing model with almost half the number of trainable model parameters. We attained a 6% performance improvement for the single-layer CNN architecture and 4% for the multi-layer architecture. We also study the influence of each type of audio augmentation on each instrument class individually.
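The abstract describes augmenting training audio with synthetic transformations. As a minimal illustration of the general idea (not the thesis's own library or parameter choices), the sketch below applies two common waveform-level augmentations, additive noise at a target signal-to-noise ratio and a circular time shift, using only NumPy; the function names and the SNR value are illustrative assumptions.

```python
import numpy as np

def add_noise(signal, snr_db=20.0, rng=None):
    """Add Gaussian noise so the result has roughly the given SNR in dB.

    Illustrative helper, not part of the thesis's augmentation library.
    """
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def time_shift(signal, shift):
    """Circularly shift the waveform by `shift` samples."""
    return np.roll(signal, shift)

# Example: augment a 1-second 440 Hz sine sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440.0 * t)
noisy = add_noise(clean, snr_db=20.0)      # same shape, noise added
shifted = time_shift(clean, sr // 4)       # delayed by 0.25 s
```

In a training pipeline, such transformations would typically be applied on the fly to each mini-batch, so the model rarely sees the exact same waveform twice.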
Keywords: Automatic Instrument Recognition, convolutional neural networks, data augmentation
Final publication: https://doi.org/10.5281/zenodo.1066137