News and Events

Sergio Giraldo defends his PhD thesis on September 16th
16 Sep 2016
Sergio Giraldo defends his PhD thesis entitled "Computational Modelling of Expressive Music Performance in Jazz Guitar: A Machine Learning Approach" on Friday September 16th 2016 at 15:00h in room 55.309 of the Communication Campus of the UPF.

The thesis defense jury is composed of: Jose Manuel Iñesta (University of Alicante), Hendrik Purwins (Aalborg University) and Enric Guaus (UPF).

Thesis abstract:
Computational modelling of expressive music performance deals with the analysis and characterization of performance deviations from the score that a musician may introduce when playing a piece in order to add expression. Most of the work in expressive performance analysis has focused on expressive duration and energy transformations, and has been mainly conducted in the context of classical piano music. However, relatively little work has been dedicated to studying expression in popular music, where expressive performance involves other kinds of transformations. For instance, in jazz music, ornamentation is an important part of expressive performance but is seldom indicated in the score, i.e. it is up to the interpreter to decide how to ornament a piece based on the melodic, harmonic and rhythmic contexts, as well as on his/her musical background. In this dissertation we investigate the computational modelling of expressive music performance in jazz music, using the guitar as a case study. High-level features are extracted from music scores, and expressive transformations (including timing, energy and ornamentation transformations) are obtained from the corresponding audio recordings. Once each note is characterized by its musical context description and expressive deviations, several machine learning techniques are explored to induce both black-box and interpretable rule-based predictive models for duration, onset, dynamics and ornamentation transformations. The models are used both to render expressive performances of new pieces and to attempt to understand expressive performance. We report on the relative importance of the considered music features, quantitatively evaluate the accuracy of the induced models, and discuss some of the learnt expressive performance rules. Furthermore, we present different approaches to semi-automatic data extraction-analysis, as well as some applications in other research fields. The findings, methods, data extracted, and libraries developed for this work are a contribution to the field of expressive music performance.
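For readers unfamiliar with this kind of modelling, a minimal sketch of the note-level setup described in the abstract is given below: each score note is encoded by contextual features and paired with a measured expressive deviation, and an interpretable model is induced from those pairs. The feature names, synthetic data and choice of scikit-learn decision tree are illustrative assumptions only, not the thesis's actual dataset or algorithms.

import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n_notes = 200

# Hypothetical per-note descriptors extracted from the score (the note's context).
X = np.column_stack([
    rng.integers(40, 90, n_notes),          # MIDI pitch
    rng.choice([0.5, 1.0, 2.0], n_notes),   # nominal duration in beats
    rng.integers(0, 4, n_notes),            # metrical position within the bar
    rng.integers(0, 12, n_notes),           # interval to the chord root (semitones)
])

# Hypothetical measured deviation: performed duration / score duration,
# synthesised here so that downbeat notes tend to be slightly lengthened.
y = 1.0 + 0.08 * (X[:, 2] == 0) + 0.05 * rng.standard_normal(n_notes)

# A shallow tree keeps the induced model interpretable, in the spirit of rule learning.
model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(export_text(model, feature_names=["pitch", "duration", "beat_pos", "chord_interval"]))

# To render a new piece, the predicted ratios would be applied to the score durations.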
15 Sep 2016 - 10:37
Technology Transfer position at the MTG
The Music Technology Group (MTG) of the Universitat Pompeu Fabra, Barcelona (http://mtg.upf.edu) invites applications for a tech transfer position.
 
The MTG is a research group specialized in sound and music computing, committed to having social impact and with a strong focus on technology transfer activities. The MTG has created several spin-off companies, is active in licensing technologies, collaborates and has contracts with a number of companies, and develops and maintains open software and collaboration-based technologies, such as Essentia and Freesound, that are exploited in industrial contexts.
 
Successful candidates for this position should be experienced researchers with motivation for and experience in technology transfer, willing to take a leading role in promoting technology transfer initiatives within the music sector.
 
Responsibilities:
The person in this position will be responsible for driving all technology transfer processes to resolution, from market prospection and the preparation of internal results for exploitation to the negotiation and follow-up of contracts and license agreements with external customers and the university. Specifically, to:
  • Promote the existing technologies and those resulting from our ongoing research projects.
  • Understand the market needs and identify potentially interested partners/customers.
  • Collaborate with the researchers in preparing the technologies so that they are ready for exploitation.
  • Actively look for tech transfer opportunities.
  • Manage technology transfer relationships (negotiations, licensing agreements).
Requirements:
  • PhD or comparable research experience.
  • Experience in applied research and/or technology transfer activities (at least 1 year).
  • Marketing skills and knowledge of basic business and Intellectual Property topics.
  • Understanding of the industrial sectors related to Music Technology.
  • Fluent in English and Spanish.
  • It is desirable that the candidate has worked outside Spain for at least 2 of the last 3 years.

Interested candidates should send a resume as well as a motivation letter, addressed to Xavier Serra, to mtg-info [at] upf [dot] edu (subject: tech transfer position) before October 15th.

 
14 Sep 2016 - 15:32
Participation in VS-Games 2016

Álvaro Sarasúa and Jordi Janer, members of the MIR-lab@MTG, presented some prototypes at VS-Games 2016, the 8th International Conference on Virtual Worlds and Games for Serious Applications.

These games were developed in the context of the PHENICX project, with the goal of enabling interaction with classical music concerts.

Janer, J., Gómez, E., Martorell, A., Miron, M., & de Wit, B. (2016). Immersive Orchestras: audio processing for orchestral music VR content. VS-Games 2016 - 8th International Conference on Virtual Worlds and Games for Serious Applications.

Sarasúa, Á., Melenhorst, M., Julià, C. F., & Gómez, E. (2016). Becoming the Maestro - A Game to Enhance Curiosity for Classical Music. VS-Games 2016 - 8th International Conference on Virtual Worlds and Games for Serious Applications.

 

12 Sep 2016 - 09:42
Participation in SMC 2016
Olga Slizovskaia, Sertan Şentürk, Rong Gong and Juanjo Bosch will participate in the 13th Sound and Music Computing Conference, which takes place in Hamburg from August 31st to September 3rd 2016. They will be presenting the following papers:
29 Aug 2016 - 10:35
Talk on factor analysis for audio classification tasks by Hamid Eghbal-zadeh
1 Aug 2016
On Monday, 1st of August at 15:00h in room 55.410 there will be a talk by Hamid Eghbal-zadeh (Department of Computational Perception, Johannes Kepler University of Linz, Austria) on "A small footprint for audio and music classification".
 
Abstract: In many audio and music classification tasks, the aim is to provide a low-dimensional representation for audio excerpts with a high discrimination power, to be used as excerpt-level features instead of the audio feature sequence. One approach is to summarize the acoustic features into a statistical representation and use it for classification purposes. A problem with many statistical features, such as adapted GMMs, is that they are very high-dimensional and also capture unwanted characteristics of the audio excerpts that do not represent their class. Using factor analysis, the dimensionality can be dramatically reduced and the unwanted factors can be discarded from the statistical representations. The state of the art in many speech-related tasks uses a specific factor analysis to extract a small footprint from speech audio. This fixed-length low-dimensional representation is known as the i-vector. I-vectors have recently been imported into MIR and have shown great promise. Recently, we won the Audio Scene Classification challenge (DCASE-2016) using i-vectors. We will also present our noise-robust music artist recognition system based on i-vector features at ISMIR 2016.
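As a rough illustration of the recipe sketched in the abstract, and only that (this is not the actual total-variability / i-vector front end; the data are synthetic stand-ins for MFCC sequences and scikit-learn is used purely for convenience), one can summarize each excerpt against a GMM background model and compress the resulting high-dimensional statistic with factor analysis:

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_excerpts, n_frames, n_mfcc = 20, 300, 13

# Stand-in for the MFCC sequences of 20 audio excerpts.
excerpts = [rng.standard_normal((n_frames, n_mfcc)) for _ in range(n_excerpts)]

# Universal background model fitted on the pooled frames.
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(np.vstack(excerpts))

def supervector(frames):
    """Stack posterior-weighted mean offsets per UBM component (high-dimensional)."""
    post = ubm.predict_proba(frames)                   # (frames, components)
    counts = post.sum(axis=0, keepdims=True).T + 1e-8  # (components, 1)
    means = post.T @ frames / counts                   # adapted component means
    return (means - ubm.means_).ravel()                # offset supervector

high_dim = np.array([supervector(x) for x in excerpts])   # (20, 8 * 13)

# Factor analysis keeps a small, fixed-length footprint per excerpt.
embeddings = FactorAnalysis(n_components=5, random_state=0).fit_transform(high_dim)
print(embeddings.shape)   # (20, 5) excerpt-level features for a classifier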
28 Jul 2016 - 16:00
Large participation of the MTG at ISMIR 2016
16 MTG researchers participate in the 17th International Society for Music Information Retrieval Conference (ISMIR 2016), which takes place in New York from August 7th to 11th 2016. ISMIR is the world’s leading research forum on processing, searching, organizing and accessing music-related data. The MTG's main contributions are the presentation of 11 papers in the main program, 2 tutorials, and 2 papers in the satellite workshop DLFM 2016.
 
Here are the papers presented as part of the main program:
 
 
Here are the tutorials that MTG people are organizing or involved in:
27 Jul 2016 - 10:47
Korg releases a new Tuner with the collaboration of the MTG
Korg has announced the TM-50TR, a Tuner / Metronome / Tone Trainer device that detects not only the pitch, but also the volume and tone of the sound as a performer plays. The Tone Trainer function is based on KORG's new ARTISTRY technology. This is proprietary technology for analyzing and evaluating sound that was developed through cooperative research under the supervision of Xavier Serra, Director of the Music Technology Group at the Pompeu Fabra University in Barcelona, Spain. 
 
In addition to its high precision as a tuner, the TM-50TR features a new "Tone Trainer" function that can evaluate the player's sound in even greater detail. When the performer plays a sustained note on her instrument, the TM-50TR will detect not only the pitch, but also the dynamics (volume) and brightness (tonal character). These three elements are displayed in the TM-50TR’s meter in real time. When the performer finishes playing the note, the stability of each of these three elements is shown in a graph, allowing the performer to see at a glance whether the sound is stable.
 
By analyzing these three basic elements of sound, including tuning, the TM-50TR can identify which aspects of the performer's playing need improvement, thus helping them practice more efficiently.
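As a rough illustration only (this is not Korg's ARTISTRY algorithm, and librosa is used here purely for convenience), the three displayed elements of a sustained note and their stability could be estimated from a recording along these lines; the file name is a placeholder:

import numpy as np
import librosa

y, sr = librosa.load("sustained_note.wav", sr=None, mono=True)

# Frame-level estimates of the three elements.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)
rms = librosa.feature.rms(y=y)[0]                        # dynamics (volume)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # brightness

def stability(track):
    """Lower coefficient of variation = steadier element over the note."""
    track = track[np.isfinite(track)]
    return float(np.std(track) / (np.mean(track) + 1e-9))

print("pitch stability:     ", stability(f0[voiced]))
print("dynamics stability:  ", stability(rms))
print("brightness stability:", stability(centroid))
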
25 Jul 2016 - 09:44
Best paper award at NIME 2016

A paper presented by MTG researchers (Cárthach Ó Nuanáin, Sergi Jordà & Perfecto Herrera) has received the best paper award at the 16th International Conference on New Interfaces for Musical Expression (NIME 2016), one of the most relevant and influential conferences in the area of music technology, which was held recently in Brisbane, Australia.

The paper "An Interactive Software Instrument for Real-time Rhythmic Concatenative Synthesis" describes an approach for generating and visualising new rhythmic patterns from existing audio in real-time using concatenative synthesis. A graph-based model enables a novel 2-dimensional visualisation and manipulation of new patterns that mimic the rhythmic and timbral character of an existing target seed pattern. A VST audio plugin has been implemented using the reported research and has got positive acceptance not only in Brisbane's presentation but also in other non-academic meetings like Sonar+D and Music Tech Fest.

22 Jul 2016 - 15:29
Keynote at IMS Conference 2016

Xavier Serra gives a keynote at the Conference of the International Musicological Society, which takes place from July 1st to 6th 2016 in Stavanger, Norway.

Title: The computational study of a musical culture through its digital traces

Abstract: Most musical cultures leave digital traces, digital artefacts, that can be processed and studied computationally, and this has been the focus of computational musicology for several decades already. This type of research requires clear formalizations and some simplifications, for example by considering that a musical culture can be conceptualized as a system of interconnected entities. A musician, an instrument, a performance, or a melodic motive are examples of entities, and they are linked through various types of relationships. We then need adequate digital traces of the entities; for example, a textual description can be a useful trace of a musician, and a recording a trace of a performance. The analytical study of these entities and of their interactions is accomplished by processing the digital traces and by generating mathematical representations and models of them. But a more ambitious goal is to go beyond the study of individual artefacts and analyze the overall system of interconnected entities in order to model a musical culture as a whole. The reader might think that this is science fiction, and she might be right, but there is research trying to advance in this direction. In this article we overview the challenges involved in this type of research and review some results obtained in various computational studies that we have carried out on several music cultures. In these studies, we have used audio signal processing, machine learning, and semantic web methodologies to describe various characteristics of the chosen musical cultures.
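As a toy illustration of this framing (with invented entities and traces, not data from the actual studies, and networkx chosen only for convenience), a musical culture's entities and relationships can be encoded as a small graph whose nodes carry pointers to their digital traces:

import networkx as nx

g = nx.MultiDiGraph()
g.add_node("Musician A", kind="musician")
g.add_node("Instrument B", kind="instrument")
g.add_node("Concert 1999", kind="performance")
g.add_node("Melodic motive C", kind="melodic motive")

g.add_edge("Musician A", "Instrument B", relation="plays")
g.add_edge("Musician A", "Concert 1999", relation="performed")
g.add_edge("Concert 1999", "Melodic motive C", relation="contains")

# Digital traces attached to entities: a recording of the performance,
# a biographical text for the musician.
g.nodes["Concert 1999"]["trace"] = "concert_1999.wav"
g.nodes["Musician A"]["trace"] = "musician_a_bio.txt"

# System-level questions then become graph queries over the linked entities.
for u, v, data in g.edges(data=True):
    print(f"{u} --{data['relation']}--> {v}")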
 
29 Jun 2016 - 23:09
Best paper awards at FMA 2016 and CBMI 2016

In the same week, two papers from the MTG received the best paper award at two conferences. Georgi Dzhambazov, first author, received the best paper award at FMA 2016 for the paper entitled "Automatic Alignment of Long Syllables In A cappella Beijing Opera". Jordi Pons, first author, received the best paper award at CBMI 2016 for the paper entitled "Experimenting with Musically Motivated Convolutional Neural Networks".

24 Jun 2016 - 17:31