News and Events

Phonos: Acousmatic Concert
Acousmatic concert organized by Phonos on Friday, May 13th 2011, at 19:30h in the Espai Polivalent.
9 May 2011 - 11:17 | view
Phonos: Music grants for young creators
Phonos has published the call for Music grants 2011 for young creators. The deadline is June 12th 2011.
29 Apr 2011 - 13:51 | view
AureaLabs initiative on voice technologies awarded in VALORTEC contest

A team of MTG researchers (Jordi Bonada, Merlijn Blaauw, Jordi Janer and Oscar Mayor) participated in the VALORTEC Contest on Business Initiatives organized by ACC1Ò. The goal of the participation, which started last November, was to develop a business plan for a spin-off company specialized in voice technologies.

The final event of the VALORTEC contest took place this week, where this initiative, named "Aurea Labs", was awarded the second prize and an additional prize from the CINC business center.

29 Apr 2011 - 10:55 | view
Music Hack Day in Barcelona

Music Hack Day Barcelona, jointly organized by the MTG-UPF and Sónar, will be a satellite event of the Sónar PRO 2011 festival, held at the Barcelona Contemporary Culture Center (CCCB) on the 16th and 17th of June, 2011.

Music Hack Day is a session of hacking in which participants will conceptualize, create and present their projects: music + software + mobile + hardware + art + the web. Anything goes as long as it's music related!

In this Music Hack Day we will put a special emphasis on involving the artist community. If you are an artist who loves creativity, culture and technology, please join us!

28 Apr 2011 - 05:33 | view
Phonos: Concert and book presentation by Harry Sparnaay
Concert by Harry Sparnaay for clarinet and electronics, plus the presentation of his book "The bass clarinet (a personal story)", on Thursday April 14th at 19:30 in the Sala Polivalent.
12 Apr 2011 - 08:13 | view
Seminar by Zbigniew Ras on automatic music indexing

On Thursday April 7th 2011 at 15:30 in room 52.321, Zbigniew Ras, from the University of North Carolina and Warsaw University of Technology, will give a research seminar on "Cascade classifiers for automatic music indexing".

Abstract: In a hierarchical decision system S, a group of classifiers can be trained using objects in S partitioned by the values of the decision attribute at all of its granularity levels. Then, attribute values at only the highest granularity level (where the corresponding granules are largest) are used to split S into decision sub-systems, each built by selecting the objects in S that share the same decision value. These sub-systems are used to train new classifiers at all granularity levels of their decision attributes. Each sub-system is split further by the sub-values of its decision value. The resulting tree-type structure, with groups of classifiers assigned to each of its nodes, is called a cascade classifier. In automatic music indexing, this cascade classifier makes a first estimate at the highest level of decision attribute values, which corresponds to the musical instrument family; further estimation is then done within that specific family. Experiments have shown that a cascade system performs better than traditional flat classification methods, which estimate the instrument directly without analyzing higher-level family information. We will also introduce a new hierarchical instrument schema derived from the clustering of acoustic features. This new schema better describes the similarity among different instruments, and among different playing techniques of the same instrument. The classification results show the higher accuracy of a cascade system with the new schema compared to the traditional schemas.
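The cascade idea described in the abstract can be sketched in a few lines. Everything below is a hypothetical toy: the two-level taxonomy, the 2-D feature vectors and the nearest-centroid classifiers stand in for the trained classifiers and real acoustic features used in the seminar's system.

```python
def nearest_centroid(centroids, x):
    """Return the label whose centroid is closest to feature vector x."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(centroids[label], x)))

# Level 1: the coarsest granules -> instrument families (hypothetical centroids).
family_centroids = {
    "strings": (0.2, 0.8),
    "brass":   (0.9, 0.3),
}

# Level 2: one classifier per family, trained only on that family's objects.
instrument_centroids = {
    "strings": {"violin": (0.1, 0.9), "cello": (0.3, 0.6)},
    "brass":   {"trumpet": (0.95, 0.4), "tuba": (0.8, 0.1)},
}

def cascade_classify(x):
    # First estimate at the highest granularity level: the family.
    family = nearest_centroid(family_centroids, x)
    # Refine the estimate within that specific family only.
    instrument = nearest_centroid(instrument_centroids[family], x)
    return family, instrument
```

The point of the cascade is visible in the second step: the instrument classifier never has to discriminate against instruments outside the predicted family, which is what the abstract contrasts with flat classification.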

4 Apr 2011 - 17:56 | view
2nd edition of the DAFX book published
The second edition of the book "DAFX: Digital Audio Effects", edited by Udo Zölzer, is out. The chapter on Spectral processing has been updated from the first edition by Jordi Bonada and Xavier Serra.
28 Mar 2011 - 14:46 | view
Seminar by Meinard Müller on Music Signal Processing

On Thursday March 24th 2011 at 15:30h in room 52.321, Meinard Müller, from the Max Planck Institute for Informatics, will give a talk on "New Developments in Music Signal Processing".

Abstract: Compared to speech signal processing, the field of music signal processing is a relatively young research discipline. Therefore, many techniques and representations have been transferred from the speech domain to the music domain. However, music signals possess specific acoustic and structural characteristics that are not shared by spoken language or audio signals from other domains. To account for musical dimensions such as pitch or rhythm, specialized audio features that exploit musical characteristics are indispensable in analyzing and processing music data. In fact, many tasks of music signal analysis only become feasible by exploiting suitable music-specific assumptions. In this talk, I address a number of feature design principles that account for various musical aspects. In particular, I show how chroma-based audio features can be enhanced by significantly boosting the degree of timbre invariance without degrading the features' discriminative power. Furthermore, I introduce a novel mid-level representation that captures dominant tempo and pulse information in music recordings. To highlight the practical and musical relevance, I discuss the various feature representations in the context of current music information retrieval tasks including music synchronization, beat tracking, and structure analysis. By giving many audio examples and presenting various prototypical user interfaces, this presentation is directed to a general audience.
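As a rough illustration of the chroma-based features mentioned in the abstract, the sketch below folds spectral peaks onto the 12 pitch classes. The peak list is a hypothetical toy input; real chroma features are computed frame by frame from a full spectrum, and the timbre-invariance enhancements discussed in the talk are not shown.

```python
import math

def pitch_class(freq_hz):
    """Map a frequency to a pitch class 0..11 (0 = A, relative to A4 = 440 Hz)."""
    semitones = 12 * math.log2(freq_hz / 440.0)
    return round(semitones) % 12

def chroma(peaks):
    """Aggregate (frequency_hz, magnitude) peaks into a 12-bin chroma vector."""
    bins = [0.0] * 12
    for freq, mag in peaks:
        bins[pitch_class(freq)] += mag  # octave information is folded away
    total = sum(bins) or 1.0
    return [b / total for b in bins]    # normalize so the vector sums to 1
```

Folding octaves away is what makes chroma robust to changes in register and, to a degree, timbre: A4 at 440 Hz and A5 at 880 Hz land in the same bin.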

21 Mar 2011 - 11:12 | view
Joan Serrà defends his PhD thesis on March 23rd

Joan Serrà defends his PhD thesis entitled "Identification of Versions of the Same Musical Composition by Processing Audio Descriptions" on Wednesday 23rd of March 2011 at 12:00h in room 55.309.

The members of the defense jury are: Climent Nadeu (UPC), Ricardo Baeza-Yates (Yahoo! Research and UPF) and Meinard Müller (Saarland University & MPI für Informatik).

Thesis abstract: Automatically making sense of digital information, and especially of digital music documents, is an important problem our modern society is facing. In fact, there are still many tasks that, although easily performed by humans, cannot be effectively performed by a computer. In this work we focus on one such task: the identification of versions of a musical piece (alternate renditions of the same musical composition, such as cover songs, live recordings, remixes, etc.). In particular, we adopt a computational approach based solely on the information provided by the audio signal. We propose a system for version identification that is robust to the main musical changes between versions, including timbre, tempo, key and structure changes. Such a system exploits nonlinear time series analysis tools and standard methods for quantitative music description, and it does not make use of a specific modeling strategy for the data extracted from audio, i.e. it is a model-free system. We report remarkable accuracies for this system, both with our data and through an international evaluation framework. Indeed, according to this framework, our model-free approach achieves the highest accuracy among current version identification systems (up to the moment of writing this thesis). Model-based approaches are also investigated. For that we consider a number of linear and nonlinear time series models. We show that, although model-based approaches do not reach the highest accuracies, they present a number of advantages, especially with regard to computational complexity and parameter setting. In addition, we explore post-processing strategies for version identification systems, and show how unsupervised grouping algorithms allow the characterization and enhancement of the output of query-by-example systems such as version identification ones. To this end, we build and study a complex network of versions and apply clustering and community detection algorithms.
Overall, our work brings automatic version identification to an unprecedented stage where high accuracies are achieved and, at the same time, explores promising directions for future research. Although our steps are guided by the nature of the signals considered (music recordings) and the characteristics of the task at hand (version identification), we believe our methodology can be easily transferred to other contexts and domains.
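A minimal sketch of the post-processing idea from the abstract: treat recordings as nodes in a network, connect pairs whose similarity exceeds a threshold, and read off connected components as candidate version groups. The similarity scores and threshold below are hypothetical, and the thesis applies proper clustering and community detection algorithms rather than this simple component search.

```python
def version_groups(similarities, threshold=0.5):
    """similarities: dict mapping (track_a, track_b) -> score in [0, 1]."""
    # Build an undirected adjacency list from the thresholded similarities.
    adjacency = {}
    for (a, b), score in similarities.items():
        adjacency.setdefault(a, set())
        adjacency.setdefault(b, set())
        if score >= threshold:
            adjacency[a].add(b)
            adjacency[b].add(a)
    # Depth-first search to collect connected components (version groups).
    groups, seen = [], set()
    for node in adjacency:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(adjacency[n] - seen)
        groups.append(group)
    return groups
```

Grouping the output of a query-by-example system this way lets a match to any one member of a group count as evidence for the whole set of versions, which is the enhancement effect the abstract refers to.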

18 Mar 2011 - 15:02 | view
Phonos: Audiovisual concert
Concert on Tuesday March 22nd at 19:30 in the Espai Polivalent, organized by Phonos and including works for audiovisual media, viola, flute and electronics.
17 Mar 2011 - 11:55 | view