Note: This bibliographic page is archived and will no longer be updated. For an up-to-date list of publications from the Music Technology Group, see the Publications list.
Audio to Score Matching by Combining Phonetic and Duration Information
Title: Audio to Score Matching by Combining Phonetic and Duration Information
Publication Type: Conference Paper
Year of Publication: 2017
Conference Name: The 18th International Society for Music Information Retrieval Conference
Authors: Gong, R., Pons, J., & Serra, X.
Conference Start Date: 23/10/2017
Conference Location: Suzhou, China
Abstract: We approach the singing-phrase audio-to-score matching problem by combining phonetic and duration information, with a focus on the case of jingju a cappella singing. We argue that, because each mode in jingju music has a basic melodic contour, using melodic information alone (such as the pitch contour) results in ambiguous matching. This leads us to propose a matching approach based on phonetic and duration information. Phonetic information is extracted with an acoustic model trained on our data, and duration information is incorporated through the Hidden Markov Model (HMM) variants we investigate. We build a model for each lyric path in our scores and perform matching by ranking the posterior probabilities of the decoded most-likely state sequences. Three acoustic models are investigated: (i) convolutional neural networks (CNNs), (ii) deep neural networks (DNNs), and (iii) Gaussian mixture models (GMMs). Two duration models are also compared: (i) a hidden semi-Markov model (HSMM) and (ii) a post-processor duration model. Results show that CNNs perform best on our (small) audio dataset and that the HSMM outperforms the post-processor duration model.
Preprint/postprint document: https://arxiv.org/pdf/1707.03547.pdf