
Phonetic-based mappings in voice-driven sound synthesis

Title Phonetic-based mappings in voice-driven sound synthesis
Publication Type Conference Paper
Year of Publication 2007
Conference Name International Conference on Signal Processing and Multimedia Applications
Authors Janer, J., & Maestre, E.
Conference Location Barcelona, Spain
Abstract In voice-driven sound synthesis applications, phonetics convey musical information that might be related to the sound of an imitated musical instrument. Our initial hypothesis is that phonetics are user- and instrument-dependent, but remain constant for a single subject and instrument. Hence, a user-adapted system is proposed, where mappings depend on how a subject performs musical articulations, given a set of examples. The system consists of, first, a voice imitation segmentation module that automatically determines note-to-note transitions and, second, a classifier that determines the type of musical articulation for each transition from a set of phonetic features. To validate our hypothesis, we ran an experiment in which a number of subjects imitated real instrument recordings with the voice. The instrument recordings consisted of short saxophone and violin phrases performed in three grades of musical articulation, labeled staccato, normal, and legato. The results of a supervised classifier (user-dependent) are compared to those of a classifier based on heuristic rules (user-independent). Finally, using these results, we improve the quality of a sample-concatenation synthesizer by selecting the most appropriate samples.
Preprint/postprint document files/publications/e5dbb0-SIGMAP07_jjaner_emaestre.pdf
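
The abstract contrasts a user-dependent supervised classifier with a user-independent heuristic rule for labeling each note-to-note transition. The sketch below is only an illustration of that idea, not the paper's implementation: the phonetic features (silence ratio, consonant duration), the thresholds, the k-NN learner, and the placeholder data are all assumptions made here for clarity.

```python
# Illustrative sketch only: features, thresholds, classifier choice and data
# are assumptions, not those of Janer & Maestre (2007).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ARTICULATIONS = ["staccato", "normal", "legato"]


def heuristic_rule(features):
    """User-independent baseline: label a note-to-note transition from
    hypothetical phonetic features [silence_ratio, consonant_duration_s]."""
    silence_ratio, consonant_dur = features
    if silence_ratio > 0.3:    # audible gap between notes -> staccato
        return "staccato"
    if consonant_dur < 0.05:   # almost no consonant -> smooth, legato
        return "legato"
    return "normal"


# Placeholder per-subject training data: one row of
# [silence_ratio, consonant_duration_s] per transition, labeled with the
# articulation the subject was imitating.
X = np.array([[0.45, 0.02], [0.40, 0.08], [0.05, 0.03],
              [0.02, 0.01], [0.20, 0.10], [0.25, 0.12]])
y = np.array(["staccato", "staccato", "legato",
              "legato", "normal", "normal"])

# User-dependent classifier trained on that subject's own example imitations.
user_model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

new_transition = np.array([[0.10, 0.04]])
print("heuristic (user-independent):", heuristic_rule(new_transition[0]))
print("trained   (user-dependent)  :", user_model.predict(new_transition)[0])
```

The predicted articulation class would then drive sample selection in a concatenative synthesizer, as described in the abstract; the k-NN model here merely stands in for whatever supervised learner the paper actually used.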