Note: This bibliographic page is archived and will no longer be updated. For an up-to-date list of publications from the Music Technology Group, see the Publications list.
musicnn: Pre-trained Convolutional Neural Networks for Music Audio Tagging
Title | musicnn: Pre-trained Convolutional Neural Networks for Music Audio Tagging
Publication Type | Conference Paper
Year of Publication | 2019
Conference Name | International Society for Music Information Retrieval (ISMIR)
Authors | Pons, J., & Serra, X.
Conference Start Date | 04/11/2019
Conference Location | Delft, Netherlands
Abstract | Pronounced "musician", the musicnn library contains a set of pre-trained, musically motivated convolutional neural networks for music audio tagging: this https URL. The repository also includes pre-trained VGG-like baselines. These models can be used as out-of-the-box music audio taggers, as music feature extractors, or as pre-trained models for transfer learning. We also provide the code used to train these models: this https URL. This framework also allows implementing novel models. For example, a musically motivated convolutional neural network with an attention-based output layer (instead of the temporal pooling layer) achieves state-of-the-art results for music audio tagging: 90.77 ROC-AUC / 38.61 PR-AUC on the MagnaTagATune dataset, and 88.81 ROC-AUC / 31.51 PR-AUC on the Million Song Dataset.
preprint/postprint document | https://arxiv.org/abs/1909.06654
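The abstract notes that replacing the temporal pooling layer with an attention-based output layer improves tagging results. As an illustration only (this is not the paper's exact architecture; the shapes, the single learned attention vector `w`, and the random features are made-up assumptions), attention pooling over frame-level features can be sketched next to a plain temporal mean pool:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(frames, w):
    # frames: (T, D) frame-level features; w: (D,) learned attention vector
    scores = frames @ w       # (T,) relevance score per time frame
    alpha = softmax(scores)   # (T,) attention weights, summing to 1
    return alpha @ frames     # (D,) attention-weighted summary of the track

rng = np.random.default_rng(0)
frames = rng.normal(size=(188, 50))   # hypothetical: 188 frames of 50-d features
w = rng.normal(size=50)               # hypothetical attention parameters

song_embedding = attention_pool(frames, w)   # attention-based output
mean_embedding = frames.mean(axis=0)         # temporal (mean) pooling baseline
```

Where temporal pooling weights every frame equally, the attention weights let the model emphasize the frames most relevant to each tag.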
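The headline results above are reported as ROC-AUC and PR-AUC, averaged across tags. As a quick reminder of what the ROC-AUC figure means, it equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one; a minimal pure-Python sketch for a single tag (the toy labels and scores below are invented for illustration):

```python
def roc_auc(y_true, y_score):
    # Rank-based ROC-AUC: the fraction of (positive, negative) pairs
    # where the positive outscores the negative (ties count as 0.5).
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one tag, four clips scored by a model.
auc = roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.1])  # 3 of 4 pairs ranked correctly
```

A multi-label tagger reports the mean of this quantity over all tags; PR-AUC is computed analogously from the precision-recall curve and is stricter when positives are rare.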