News and Events

CompMusic Seminar
23 Feb 2017

On February 23rd 2017, Thursday, from 9:30h to 14:00h, in room 55.309 of the Communication Campus of the Universitat Pompeu Fabra in Barcelona, we will have a CompMusic seminar. This seminar accompanies the PhD thesis defenses of Gopala Krishna Koduri and Sertan Şentürk, which take place the previous day.

9:30 Gerhard Widmer (Johannes Kepler University, Linz, Austria)
"Con Espressione! - An Update from the Computational Performance Modelling Front"
Computational models of expressive music performance have been a target of considerable research efforts in the past 20 years. Motivated by the desire to gain a deeper understanding of the workings of this complex art, various research groups have proposed different classes of computational models (rule-based, case-based, machine-learning-based) for different parametric dimensions of expressive performance, and it has been demonstrated in various studies that such models can provide interesting new insights into this musical art. In this presentation, I will review recent work that has carried this research further. I will mostly focus on a general modelling framework known as the "Basis Mixer", and show various extensions of this model that have gradually increased the modelling power of the framework. However, it will also become apparent that there are still serious limitations and obstacles on the path to comprehensive models of musical expressivity, and I will briefly report on a new ERC project entitled "Con Espressione", which expressly addresses these challenges. Along the way, we will also hear about a recent musical "Turing Test" that is said to demonstrate that computational performance models have now reached a level where their interpretations of classical piano music are qualitatively indistinguishable from true human performances -- a story that I will quickly try to put into perspective ...
10:30 Tillman Weyde (City, University of London, UK)
"Digital Musicology with Large Datasets"
The increasing availability of music data as well as networks and computing resources has the potential to profoundly change the methodology of musicological research towards a more data-driven empirical approach. However, many questions are still unanswered regarding the technology, data collection and provision, metadata, analysis methods and legal aspects. This talk will report on an effort to address these questions in the Digital Music Lab project, and present achieved outcomes, lessons learnt and challenges that emerged in this process. 
11:30 Coffee break
12:00 Anja Volk (Utrecht University, Netherlands)
"The explication of musical knowledge through automatic pattern finding"
In this talk I will discuss the role of computational modeling in gaining insights into the specifics of a musical style for which there exists no long-standing music theory, unlike Western classical music, Carnatic music or Ottoman-Turkish makam music. Specifically, I address the role of automatic pattern search in enabling us to scrutinize what it is that we really know about a specific music style, if we consider ourselves to be musical experts. I elaborate my hypothesis that musical knowledge is often implicit, while computation enables us to make part of this knowledge explicit and evaluate it on a data set. This talk will address the explication of musical knowledge for the question of when we perceive two folk melodies to be variants of each other, for the case of Dutch and Irish folk songs, and when we consider a piece to be a ragtime. With examples from research within my VIDI project MUSIVA on patterns in these musical styles, I discuss how musical experts and non-experts working together on developing computational methods can gain important insights into the specifics of a musical style, and into the implicit knowledge of musical experts.
13:00 György Fazekas (Queen Mary, University of London, UK)
"Convergence of Technologies to Connect Audio with Meaning: from Semantic Web Ontologies to Semantic Audio Production"
Science and technology play an increasingly vital role in how we experience, compose, perform, share and enjoy musical audio. The invention of recording in the late 19th century is a profound example that, for the first time in human history, disconnected music performance from listening and gave rise to a new industry as well as new fields of scientific investigation. But musical experience is not just about listening. Human minds make sense of what we hear by categorising and by making associations, cognitive processes which give rise to meaning or influence our mood. Perhaps the next revolution akin to recording is therefore in audio semantics. Technologies that mimic our abilities and enable interaction with audio on human terms are already changing the way we experience it. The emerging field of Semantic Audio lies at the confluence of several key fields, namely signal processing, machine learning and Semantic Web ontologies, which enable knowledge representation and logic-based inference. In my talk, I will put forward that synergies between these fields provide a fruitful, if not necessary, way to account for human interpretation of sound. I will outline music- and audio-related ontologies and ontology-based systems that have found applications on the Semantic Web, as well as intelligent audio production tools that enable linking musical concepts with signal processing parameters in audio systems. I will outline my recent work demonstrating how web technologies may be used to create interactive performance systems that enable mood-based audience-performer communication, and how semantic audio technologies enable us to link social tags and audio features to better understand the relationship between music and emotions. I will hint at how some principles used in my research also contribute to enhancing scientific protocols, easing experimentation and facilitating reproducibility.
Finally, I will discuss challenges in fusing audio and semantic technologies and outline some future opportunities they may bring about.
1 Feb 2017 - 13:35 | view
Tutorial - Natural Language Processing for Music Information Retrieval
30 Jan 2017

In this tutorial, we will focus on linguistic, semantic and statistics-based approaches to extract and formalize knowledge about music from naturally occurring text. We will provide the audience with a preliminary introduction to NLP, covering its main tasks along with the state-of-the-art and most recent developments. In addition, we will showcase the main challenges that the music domain poses to the different NLP tasks, and the methodologies already developed for leveraging them in MIR and musicological applications.

  • Date: January 30th 2017. 14:30h - 17:30h
  • Location: Poblenou Campus, UPF (Roc Boronat 138, Barcelona). Room 52.S27
  • Tutorial presenters: Sergio Oramas, Luis Espinosa (Music meets NLP MdM project)

Updated version of the tutorial presented at ISMIR2016.

Free registration here.

24 Jan 2017 - 10:22 | view
Carolina Foundation: scholarship program (2017-2018) for Ibero-American students

The Carolina Foundation has launched a new fellowship program (2017-2018) for Ibero-American students who aim to complete their education in Spain. The program will offer 521 scholarships.

If you are interested in applying, you can find more information on the ETIC website.

11 Jan 2017 - 16:10 | view
Music Technology Group - report 2016

During 2016 the MTG has been involved in a significant number of projects and activities, and its members have been very active in promoting their research through outreach activities, publications and conferences. The following report presents some relevant indicators that reflect the overall activity and resources of the group during 2016. This report is in line with our open data and transparency policy.


MTG members

Faculty 4
Postdoc 17
PhD students 20
Master student internships 6
Developers 7
Administration 2
Others 3
Visitors 8

Total members 2016 (excluding visitors) = 59 people

MTG members 2012 to 2016


Revenue for competitive projects

Total revenue for public funded competitive projects 2016 = 1.543.747€

Revenue for competitive projects 2012 to 2016


Research and innovation projects

European projects (9): AudioCommons, CAMUT, CompMusic, Giant Steps, MusicBricks, MUSMAP, Phenicx, Rapid-Mix, TELMI
National projects (3): CASAS, Mingus, Timul
Private company projects (2): Korg, Yamaha

Total projects 2016 = 14

Projects by category 2012 to 2016



PhD theses: 10 defenses during 2016 (cumulative: 44)
Publications: 85 during 2016 (cumulative: 1.148)
Conferences: participation in 23 different conferences
Outreach activities:
Participation in more than 15 outreach activities open to students, professional audiences or the general public, including, amongst others, Festa de la Ciència, Setmana de la Ciència, Music Tech Fest, Pint of Science, the Sónar festival, the Mutek festival, and the organization of several public events.
Award from the Board of Trustees of the UPF in Knowledge Transfer category: S. Gulati, G. Koduri
Singing voice challenge, Interspeech: J. Bonada, M. Blaauw
Best paper award, FMA: G. Dzhambazov, Y. Yang, R. Caro, X. Serra
Best paper award, NIME: C. O Nuanáin, S. Jordà, P. Herrera
Best paper award, CBMI: J. Pons, T. Lidy, X. Serra
22 Dec 2016 - 14:25 | view
Post-doctoral opportunities at the MTG

There are a number of possibilities to do a post-doc at the MTG, in particular:

1. Ramon y Cajal 2016. Post-doctoral positions funded by the Spanish government with which you can join a Spanish research group like the MTG. For information and application:


2. Juan de la Cierva 2016. Post-doctoral positions for young doctors funded by the Spanish government with which you can join a Spanish research group like the MTG. For information and application:


3. Tenure-track position in Computer Science in the framework of the Maria de Maeztu Research Program of the Department of Information and Communication Technologies (DICT). For information and application:


4. Senior faculty position in Computer Science in the framework of the Maria de Maeztu Research Program of the Department of Information and Communication Technologies (DICT). For information and application:


7 Dec 2016 - 13:16 | view
Application open for the Master in Sound and Music Computing 2017-2018
28 Nov 2016 - 1 Jun 2017

The application for the Master in Sound and Music Computing, program 2017-2018, is open on-line. There are 4 application periods (deadlines: January 16th, March 10th, April 28th, June 1st). For more information on the UPF master programs and on how to register to the SMC Master check here. For other information on the SMC master check:

5 Dec 2016 - 11:01 | view
Possibility of 3-year postdocs at the MTG for researchers from outside Spain

The Catalan government is opening a call for post-doc researchers to join Catalan universities, called the Beatriu de Pinós program.


Requirements:

  • Have obtained a PhD between 01/01/2009 and 31/12/2014 (even later in some cases)
  • Minimum of 2 years of postdoctoral experience outside Spain.
  • Not having lived in Spain for more than 12 months in the last 3 years.


Conditions:

  • 2-year duration, extendable by 1 more year. Starting before January 1st 2018.
  • 32.800 EUR / year + 6.000 EUR for supporting research

Deadline: 01/12/2016

More info here.

21 Nov 2016 - 13:33 | view
Master theses from the SMC Master 2015-2016
15 Nov 2016 - 11:02 | view
Talks by Dr. Eita Nakamura and Dr. Shinji Sako
15 Nov 2016

Dr. Eita Nakamura (Kyoto University, Japan) and Dr. Shinji Sako (Nagoya Institute of Technology, Japan)
will be giving two talks:


"Rhythm Transcription of Piano Performances Based on Hierarchical Bayesian Modelling of Repetition and Modification of Musical Note Patterns" by Dr. Eita Nakamura, Kyoto University, Japan. (15th Nov, 17:00h. Room 52.321)

We present a method of rhythm transcription (i.e., automatic recognition of note values in music performance signals) based on a Bayesian music language model that describes the repetitive structure of musical notes. Conventionally, music language models for music transcription are trained with a dataset of musical pieces. Because typical musical pieces have repetitions consisting of a limited number of note patterns, better models fitting individual pieces could be obtained by inducing compact grammars. The main challenges are inducing appropriate grammar for a score that is observed indirectly through a performance and capturing incomplete repetitions, which can be represented as repetitions with modifications. We propose a hierarchical Bayesian model in which the generation of a language model is described with a Dirichlet process and the production of musical notes is described with a hierarchical hidden Markov model (HMM) that incorporates the process of modifying note patterns. We derive an efficient algorithm based on Gibbs sampling for simultaneously inferring from a performance signal the score and the individual language model behind it. Evaluations showed that the proposed model outperformed previously studied HMM-based models.
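As a toy illustration of the kind of HMM decoding the abstract refers to (this is not the authors' model: the note values, tempo and probabilities below are invented for the example), the following Python sketch recognises note values from noisy performed inter-onset intervals with the Viterbi algorithm:

```python
import math

# Hypothetical toy setup: hidden states are note values, observations
# are performed inter-onset intervals (IOIs) in seconds.
NOTE_VALUES = {"eighth": 0.5, "quarter": 1.0, "half": 2.0}  # in beats
TEMPO = 0.5  # seconds per beat (120 BPM), assumed known here

def emission_logprob(ioi, value_beats, sigma=0.05):
    """Gaussian log-likelihood of an observed IOI given an ideal duration."""
    ideal = value_beats * TEMPO
    return -0.5 * ((ioi - ideal) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def transition_logprob(prev, cur):
    # Favour repeating the previous note value, mimicking the
    # repetitive note patterns the talk describes.
    return math.log(0.7 if prev == cur else 0.15)

def viterbi(iois):
    states = list(NOTE_VALUES)
    # First layer: uniform prior over note values, no backpointer.
    trellis = [{s: (emission_logprob(iois[0], NOTE_VALUES[s]) - math.log(len(states)), None)
                for s in states}]
    for ioi in iois[1:]:
        layer = {}
        for s in states:
            best_prev = max(states, key=lambda p: trellis[-1][p][0] + transition_logprob(p, s))
            score = (trellis[-1][best_prev][0] + transition_logprob(best_prev, s)
                     + emission_logprob(ioi, NOTE_VALUES[s]))
            layer[s] = (score, best_prev)
        trellis.append(layer)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: trellis[-1][s][0])
    path = [state]
    for layer in reversed(trellis[1:]):
        state = layer[state][1]
        path.append(state)
    return path[::-1]

# A performance of quarter, quarter, eighth, eighth, half (with timing noise)
print(viterbi([0.52, 0.49, 0.26, 0.24, 1.03]))
```

The actual model in the talk goes much further, learning the language model itself (via a Dirichlet process) jointly with the score by Gibbs sampling rather than fixing the probabilities by hand.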


"Real-time audio-to-score following and its applications" by Dr. Shinji Sako (and his students). Nagoya Institute of Technology, Japan. (15th Nov, 17:45 h. Room 52.321)

We present a robust on-line algorithm for real-time audio-to-score following based on a delayed-decision and anticipation framework. We employ Segmental Conditional Random Fields and a Linear Dynamical System to model musical performance by humans. The combination of these models allows an efficient iterative decoding of score position and tempo. Our approach combines two advantages: a delayed-decision Viterbi algorithm that uses future information to determine past score positions with high reliability, thus improving alignment accuracy, and the fact that future positions can be anticipated using an adaptively estimated tempo. We will also talk about the interim progress of the research and some applications using this approach.
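As a minimal, hypothetical sketch of the anticipation idea (this is not Dr. Sako's system: the smoothing scheme, function names and numbers are invented for illustration), the following Python snippet predicts upcoming onset times from an adaptively estimated beat period:

```python
# Each observed onset updates a smoothed estimate of the beat period,
# which in turn predicts when the next score event is due.

def track(onsets, score_beats, alpha=0.3, init_period=0.5):
    """Return predicted onset times (seconds) for each upcoming score event."""
    period = init_period  # seconds per beat, adapted online
    predictions = []
    for i in range(1, len(onsets)):
        beat_gap = score_beats[i] - score_beats[i - 1]
        observed_period = (onsets[i] - onsets[i - 1]) / beat_gap
        # Exponential smoothing keeps the tempo estimate stable under noise.
        period = (1 - alpha) * period + alpha * observed_period
        if i + 1 < len(score_beats):
            next_gap = score_beats[i + 1] - score_beats[i]
            predictions.append(onsets[i] + next_gap * period)
    return predictions

# Performer gradually slows from 0.5 s/beat over a run of quarter notes
onsets = [0.0, 0.5, 1.05, 1.62, 2.22]
beats = [0, 1, 2, 3, 4, 5]
print(track(onsets, beats))
```

The real system replaces this ad-hoc smoothing with a Linear Dynamical System and combines the tempo estimate with delayed-decision Viterbi decoding over the score model.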

15 Nov 2016 - 10:09 | view
Seminar on music knowledge extraction using machine learning
4 Dec 2016

Taking advantage of the researchers coming to Barcelona for the NIPS conference, on December 4th we are organizing a small and informal seminar to discuss various topics related to machine learning applied to music, with special emphasis on its knowledge extraction aspects.

Full program:

10 Nov 2016 - 19:02 | view