What/when causal expectation modelling applied to audio signals

Title: What/when causal expectation modelling applied to audio signals
Publication Type: Journal Article
Year of Publication: 2009
Authors: Hazan, A., Marxer, R., Brossier, P., Purwins, H., Herrera, P., & Serra, X.
Journal Title: Connection Science
Volume: 21
Issue: 2-3
Pages: 119–143
Journal Date: 06/2009
Short Title: What/when causal expectation modelling applied to audio signals
ISSN: 0954-0091
Abstract

A causal system that represents a stream of music as musical events, and generates further expected events, is presented. Starting from an auditory front-end that extracts low-level features (i.e. MFCCs) and mid-level features such as onsets and beats, an unsupervised clustering process builds and maintains a set of symbols aimed at representing musical stream events using both timbre and time descriptions. The time events are represented as inter-onset intervals relative to the beats. These symbols are then processed by an expectation module using Prediction by Partial Matching (PPM), a multiscale technique based on N-grams. To characterise the ability of the system to generate expectations that match both the ground truth and the system transcription, we introduce several measures that take into account the uncertainty associated with the unsupervised encoding of the musical sequence. The system is evaluated using a subset of the ENST-drums database of annotated drum recordings. We compare three approaches to combining timing (when) and timbre (what) expectation. In our experiments, we show that the induced representation is useful for generating expectation patterns in a causal fashion.
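The expectation module described above blends N-gram statistics of several context lengths with an escape mechanism. The paper's own implementation details are not reproduced here; the following is a minimal, self-contained sketch of a PPM-style predictor over a discrete symbol stream (class and method names are illustrative assumptions, and the escape weighting follows the simple "method A" scheme rather than whatever variant the authors used):

```python
from collections import defaultdict

class PPMPredictor:
    """Minimal PPM-style (Prediction by Partial Matching) N-gram predictor.

    Counts symbols occurring after contexts of length 0..max_order and
    blends orders from longest to shortest via escape probabilities.
    """

    def __init__(self, max_order=2):
        self.max_order = max_order
        # counts[context_tuple][symbol] -> occurrence count
        self.counts = defaultdict(lambda: defaultdict(int))
        self.alphabet = set()

    def update(self, history, symbol):
        """Record that `symbol` followed `history` (causal, online update)."""
        self.alphabet.add(symbol)
        for order in range(min(self.max_order, len(history)) + 1):
            ctx = tuple(history[len(history) - order:])
            self.counts[ctx][symbol] += 1

    def predict(self, history):
        """Return a probability distribution over known symbols given `history`."""
        probs = {s: 0.0 for s in self.alphabet}
        remaining = 1.0  # probability mass not yet assigned by longer contexts
        for order in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - order:])
            ctx_counts = self.counts.get(ctx)
            if not ctx_counts:
                continue
            total = sum(ctx_counts.values())
            for s, c in ctx_counts.items():
                probs[s] += remaining * c / (total + 1)
            remaining *= 1.0 / (total + 1)  # escape to the next shorter context
        # spread any leftover mass uniformly over the alphabet
        if self.alphabet:
            for s in probs:
                probs[s] += remaining / len(self.alphabet)
        return probs
```

For instance, training online on an alternating symbol sequence and then querying after a context ending in `'a'` yields a distribution strongly favouring `'b'`; in the system described above the symbols would instead be timbre/timing cluster labels produced by the unsupervised encoding.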

Preprint/postprint document: http://www.mtg.upf.es/files/publications/whatwhen_connection.pdf
Final publication: doi:10.1080/09540090902733764