News and Events
Participation in SMC 2016
Olga Slizovskaia, Sertan Şentürk, Rong Gong and Juanjo Bosch will participate in the 13th Sound and Music Computing Conference (SMC 2016), which takes place in Hamburg from August 31st to September 3rd, 2016. They will be presenting the following papers:
- Slizovskaia, O., Gómez, E., & Haro, G. "Automatic musical instrument recognition in audiovisual recordings by combining image and audio classification strategies".
- Şentürk, S., & Serra, X. "Composition Identification in Ottoman-Turkish Makam Music Using Transposition-Invariant Partial Audio-Score Alignment".
- Gong, R., Yang, Y., & Serra, X. "Pitch Contour Segmentation for Computer-aided Jingju Singing Training".
- Şentürk, S., Koduri, G. K., & Serra, X. "A Score-Informed Computational Description of Svaras Using a Statistical Model".
- Bosch, J., & Gómez, E. "Melody extraction based on a source-filter model using pitch contour selection".
Talk on factor analysis for audio classification tasks by Hamid Eghbal-zadeh
1 Aug 2016
On Monday, August 1st, at 15:00 in room 55.410, there will be a talk by Hamid Eghbal-zadeh (Department of Computational Perception, Johannes Kepler University of Linz, Austria) on "A small footprint for audio and music classification".
Abstract: In many audio and music classification tasks, the aim is to provide a low-dimensional representation of audio excerpts with high discriminative power, to be used as excerpt-level features instead of the frame-level audio feature sequence. One approach is to summarize the acoustic features into a statistical representation and use it for classification. A problem with many such statistical features, such as adapted GMMs, is that they are very high-dimensional and also capture unwanted characteristics of the audio excerpts that do not represent their class. Using factor analysis, the dimensionality can be dramatically reduced and the unwanted factors can be discarded from the statistical representations. The state of the art in many speech-related tasks uses a specific kind of factor analysis to extract a small footprint from speech recordings. This fixed-length, low-dimensional representation is known as the i-vector. I-vectors have recently been adopted in MIR and have shown great promise. Recently, we won the Acoustic Scene Classification challenge (DCASE-2016) using i-vectors, and we will present our noise-robust music artist recognition system based on i-vector features at ISMIR-2016.
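The abstract's central idea (summarize each excerpt into one high-dimensional statistical vector, then learn a linear factor model that keeps only a few informative dimensions) can be sketched as follows. This is a toy illustration using plain PCA via SVD as the linear factor model, not the GMM-UBM/total-variability front-end of actual i-vector extraction; all names, sizes and data are made up for the example.

```python
import numpy as np

def excerpt_supervector(frames):
    """Summarize a (n_frames, n_features) feature sequence into one
    fixed-length statistical representation (per-feature mean + std)."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def fit_projection(supervectors, dim):
    """Learn a linear low-rank projection (plain PCA via SVD) mapping
    high-dimensional supervectors to `dim`-dimensional vectors."""
    mu = supervectors.mean(axis=0)
    _, _, vt = np.linalg.svd(supervectors - mu, full_matrices=False)
    return mu, vt[:dim]          # mean and the top `dim` directions

def project(supervector, mu, basis):
    """Map one supervector to its compact excerpt-level feature."""
    return basis @ (supervector - mu)

rng = np.random.default_rng(0)
# 50 excerpts, each a sequence of 200 frames of 20 acoustic features
excerpts = [rng.normal(size=(200, 20)) for _ in range(50)]
svs = np.stack([excerpt_supervector(x) for x in excerpts])  # (50, 40)
mu, basis = fit_projection(svs, dim=8)
v = project(svs[0], mu, basis)   # compact fixed-length representation
print(v.shape)
```

The real i-vector extractor replaces the mean/std summary with GMM sufficient statistics and learns the subspace with an EM-trained total-variability model, but the footprint-reduction step it performs is of this linear kind.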
Large participation of the MTG at ISMIR 2016
Sixteen MTG researchers participate in the 17th International Society for Music Information Retrieval Conference (ISMIR 2016), which takes place in New York from August 7th to 11th, 2016. ISMIR is the world's leading research forum on processing, searching, organizing and accessing music-related data. The MTG's main contributions are 11 papers in the main program, 2 tutorials, and 2 papers in the satellite workshop DLFM 2016.
Here are the papers presented as part of the main program:
Here are the papers presented at the 3rd International Digital Libraries for Musicology Workshop (DLFM 2016):
Here are the tutorials that MTG people are organizing or involved in:
Korg releases a new Tuner with the collaboration of the MTG
Korg has announced the TM-50TR, a Tuner / Metronome / Tone Trainer device that detects not only the pitch, but also the volume and tone of the sound as a performer plays. The Tone Trainer function is based on KORG's new ARTISTRY technology, proprietary technology for analyzing and evaluating sound that was developed through cooperative research under the supervision of Xavier Serra, Director of the Music Technology Group at Pompeu Fabra University in Barcelona, Spain.
In addition to its high precision as a tuner, the TM-50TR features a new "Tone Trainer" function that can evaluate the player's sound in even greater detail. When the performer plays a sustained note, the TM-50TR detects not only the pitch, but also the dynamics (volume) and brightness (tonal character). These three elements are displayed on the TM-50TR's meter in real time. When the performer finishes playing the note, the stability of each of the three elements is shown in a graph, allowing the player to see at a glance whether the sound is stable.
By analyzing these three basic elements of sound, including tuning, the TM-50TR can identify which aspects of the performer's playing need improvement, helping them practice more efficiently.
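As an illustration of how these three elements can be measured from a signal, here is a minimal Python/NumPy sketch: RMS for dynamics, spectral centroid for brightness, and an autocorrelation peak for pitch. This is a generic textbook approach for the purpose of illustration, not Korg's proprietary ARTISTRY analysis.

```python
import numpy as np

SR = 44100

def analyze_note(x, sr=SR):
    """Estimate the three elements such a tuner displays: pitch (Hz),
    dynamics (RMS level) and brightness (spectral centroid, Hz)."""
    # Dynamics: root-mean-square amplitude of the frame.
    rms = float(np.sqrt(np.mean(x ** 2)))
    # Brightness: amplitude-weighted mean frequency of the spectrum.
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    centroid = float(np.sum(freqs * mag) / np.sum(mag))
    # Pitch: lag of the autocorrelation peak (FFT-based, zero-padded),
    # searched over a 50-1000 Hz fundamental range.
    ac = np.fft.irfft(np.abs(np.fft.rfft(x, 2 * len(x))) ** 2)[:len(x)]
    lo, hi = sr // 1000, sr // 50
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag, rms, centroid

# A sustained 440 Hz tone with a quieter octave partial.
t = np.arange(SR) / SR
note = 0.8 * np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 880 * t)
pitch, rms, centroid = analyze_note(note)
print(f"pitch={pitch:.1f} Hz  rms={rms:.2f}  brightness={centroid:.0f} Hz")
```

Stability over time, as shown in the TM-50TR's graph, would follow by running such an analysis frame by frame and measuring the variance of each element.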
Best paper award at NIME 2016
A paper by MTG researchers (Cárthach Ó Nuanáin, Sergi Jordà & Perfecto Herrera) has received the best paper award at the 16th International Conference on New Interfaces for Musical Expression (NIME), one of the most relevant and influential conferences in the area of music technology, held recently in Brisbane, Australia.
The paper, "An Interactive Software Instrument for Real-time Rhythmic Concatenative Synthesis", describes an approach for generating and visualising new rhythmic patterns from existing audio in real time using concatenative synthesis. A graph-based model enables a novel two-dimensional visualisation and manipulation of new patterns that mimic the rhythmic and timbral character of an existing target seed pattern. A VST audio plugin implementing the reported research has been positively received, not only at the Brisbane presentation but also at non-academic events such as Sonar+D and Music Tech Fest.
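The core operation of rhythmic concatenative synthesis (rebuild a target pattern by picking, for each slice, the timbrally closest unit from a source corpus) can be sketched as below. This is a deliberately minimal illustration with fixed-size grains and a two-feature nearest-neighbour search; the paper's actual system uses onset-based segmentation and a graph model, which are omitted here.

```python
import numpy as np

def grain_features(grain, sr=44100):
    """A tiny timbral descriptor per grain: loudness (RMS) and
    brightness (spectral centroid, normalized by the sample rate)."""
    mag = np.abs(np.fft.rfft(grain))
    freqs = np.fft.rfftfreq(len(grain), 1 / sr)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
    return np.array([np.sqrt(np.mean(grain ** 2)), centroid / sr])

def concatenate(target, corpus, grain=1024):
    """Rebuild `target` from `corpus` audio: for each target grain,
    pick the corpus grain with the nearest timbral descriptor."""
    t_grains = [target[i:i + grain]
                for i in range(0, len(target) - grain + 1, grain)]
    c_grains = [corpus[i:i + grain]
                for i in range(0, len(corpus) - grain + 1, grain)]
    c_feats = np.stack([grain_features(g) for g in c_grains])
    out = []
    for g in t_grains:
        dist = np.linalg.norm(c_feats - grain_features(g), axis=1)
        out.append(c_grains[int(np.argmin(dist))])   # best-matching unit
    return np.concatenate(out)

rng = np.random.default_rng(1)
# Source corpus: 1 s of amplitude-modulated noise; target: a quieter seed.
corpus = rng.normal(size=44100) * np.sin(np.linspace(0, 8, 44100)) ** 2
target = 0.3 * rng.normal(size=8192)
result = concatenate(target, corpus)
print(result.shape)
```

A real-time version would additionally cache the corpus analysis and crossfade adjacent grains to avoid clicks at the concatenation points.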
Keynote at IMS Conference 2016
Xavier Serra gives a keynote at the Conference of the International Musicological Society, which takes place from July 1st to 6th, 2016 in Stavanger, Norway.
Title: The computational study of a musical culture through its digital traces
Abstract: From most musical cultures there are digital traces, digital artefacts, that can be processed and studied computationally, and this has been the focus of computational musicology for several decades now. This type of research requires clear formalizations and some simplifications, for example, by considering that a musical culture can be conceptualized as a system of interconnected entities. A musician, an instrument, a performance, or a melodic motive are examples of entities, and they are linked through various types of relationships. We then need adequate digital traces of the entities; for example, a textual description can be a useful trace of a musician, and a recording a trace of a performance. The analytical study of these entities and of their interactions is accomplished by processing the digital traces and by generating mathematical representations and models of them. But a more ambitious goal is to go beyond the study of individual artefacts and analyze the overall system of interconnected entities in order to model a musical culture as a whole. The reader might think that this is science fiction, and she might be right, but there is research trying to advance in this direction. In this article we overview the challenges involved in this type of research and review some results obtained in various computational studies that we have carried out on several music cultures. In these studies, we have used audio signal processing, machine learning, and semantic web methodologies to describe various characteristics of the chosen musical cultures.
Best paper awards at FMA 2016 and CBMI 2016
In the same week, two papers from the MTG obtained best paper awards at two conferences. Georgi Dzhambazov, as first author, received the best paper award at FMA 2016 for the paper "Automatic Alignment of Long Syllables In A cappella Beijing Opera". Jordi Pons, as first author, received the best paper award at CBMI 2016 for the paper "Experimenting with Musically Motivated Convolutional Neural Networks".
Participation in the Data-driven Knowledge Extraction Workshop at UPF
Several members of the MTG present their research projects at the María de Maeztu DTIC-UPF Data-driven Knowledge Extraction Workshop, which takes place at UPF on June 28th-29th, 2016. The workshop is open to the public; free registration at www.upf.edu/mdm-dtic.
Here are the presentations with MTG participation:
Participation in CBMI 2016
Jordi Pons participates in the 14th International Workshop on Content-based Multimedia Indexing (CBMI 2016), which takes place in Bucharest from June 15th to 17th, 2016. He is presenting the following article:
Participation in Sonar+D 2016
As in past years, the MTG participates in the Sonar Festival, which takes place from June 16th to 18th, 2016, specifically in its professional area, Sonar+D.
Our participation this year is focused on the following activities:
Sonar Innovation Challenge:
After 5 years of successfully organising the Barcelona Music Hack Day (MHD) in collaboration with Sonar+D, the MTG is now pushing forward a new activity within the festival: the Sónar Innovation Challenge (SIC).
The SIC is a platform for collaboration between innovative tech companies and creators (programmers, designers, artists) that aims to produce disruptive prototypes to be showcased on the main stage of Sonar+D. The interaction between companies and creators happens through challenges proposed by the companies themselves, which seek to boost the impact and visibility of the featured technologies and to respond to the market's need for innovation. Challenges are not exclusively technology-driven; they can also be driven by content or artistic motivation.
In this first edition, the SIC hosts 4 challenges: Extended electronic music festival experience (Absolut Labs), Interactive playlist based on crowd behaviour (Deezer), Expressive gaming through gesture interaction (RAPID-MIX) and Collective smartphone experience (RAPID-MIX and CoSiMa).
We were truly thrilled by the quantity (over 100) and quality of the applications we got in this first edition. The Sónar Innovation Challenge has been designed to attract creators with a wide variety of profiles and skills, and from this perspective the Open Call has been completely successful. There is a great balance of artists, coders, makers, designers, researchers… the perfect combination to form great multidisciplinary teams.
The SIC started with an online phase in which each team of challengers collaborated over the internet with the mentors of their challenge to define roles within the team, describe the team's proposed solution, create a first prototype, and prepare a work plan for the three intensive days of the on-site phase. The on-site phase takes place from June 15th to June 17th, with a kick-off meeting at IronHack followed by two more days of intensive work, before the outcomes of each challenge are presented at Sonar+D.
Giant Steps Booth at Market Lab area:
As part of the dissemination activities of the Giant Steps project, several prototypes and products developed during the project will be demoed at a booth dedicated to Giant Steps in the Market Lab area.
The MTG will present the "House harmonic filler", an expert agent for harmony specialized in house music, and "Drumming with Style", an expert agent for the variation and generation of rhythmic patterns. Reactable Systems will introduce ROTOR, a new app which, by allowing the use of tangible control objects on capacitive screens, brings the unique tangible experience of the Reactable tabletop to the iPad for the first time, as well as RhythmCat, a concatenative synthesis VST plugin developed in collaboration with the MTG. Native Instruments will present iMaschine 2 for iOS, and JKU will present "Rhythm Variation".
Users will be able to play with all these applications, which will run synchronized in a shared session.