News and Events

Participation of the MTG at Primavera Pro

The MTG will participate next week in Primavera Pro, the professional section of the Primavera Sound Festival in Barcelona, with two talks. Sergio Oramas and Frederic Font will present topics related to their research to the audience of the festival.

Sergio Oramas
Millions of people are using streaming services, and this Big Data is an ideal fuel for Artificial Intelligence systems, as well as for music recommendation systems. These systems work very well for relatively popular artists, but what happens to a band that is new or has very few followers? During this presentation we will show the opportunities that Artificial Intelligence offers and will reveal the initiatives that the industry is taking in this domain.
Frederic Font
Creative Commons offers a framework with which independent music artists, sound designers and other sound creators can release their music and sound effects under clear terms. In this talk we will explain the reasons why such audio content is not yet extensively used in the professional sector and discuss possible solutions based on the work done in the AudioCommons project.


25 May 2017 - 14:15 | view
Participation of TELMI project in the Festa de la Ciencia

On May 27th at 7:20PM, the TELMI project will be presented at the Festa de la Ciencia organised by the Barcelona town hall at Parc de la Ciutadella.

The presentation will be part of the Microxerrades (microtalks) section, under the title "Tecnologia per aprendre música" (technology for learning music).

This is the 11th edition of the festival, which aims to disseminate scientific knowledge and technological innovation. The festival is free and open to the general public.

Photo: BarcelonaCiencia

23 May 2017 - 18:14 | view
Participation of AudioCommons project in a panel at Sonar+D

The AudioCommons project will take part in the panel Creative Commons for the Creative Industries on June 15th at 3PM at Sonar+D.

In this panel we will discuss different perspectives and specific examples of how Creative Commons content can be used by the creative industries, how it can generate economic returns for content creators, and how specific legal aspects can be addressed. The panel will be organized around questions previously submitted by the audience, which can be posted on Twitter with the hashtag #ACSonarPlusD. All the information can be found on the AudioCommons website.

The panel will be composed of members of the Audio Commons consortium and external professionals:

Malcolm Bain is a founding partner of id law partners, specialized in the legal issues of open source software and content, including both developing and freeing software, establishing licensing strategies and enforcing IPR. Malcolm is a member of the Free Software Foundation Europe and Free Software Chair of the Universitat Politècnica de Catalunya.

Emmanuel Donati is CEO at Jamendo, one of the biggest platforms for free independent music. He is in charge of a catalog of 600,000 tracks shared by 40,000 artists from all over the world, and works on various aspects of the strategy to make independent music more accessible and to bring an alternative business model to musicians.

Roger Subirana is a composer and music producer who, apart from his personal compositions, creates music for cinema, TV, theatre, several audiovisual projects and advertisements. He releases his work under Creative Commons, which has facilitated his international recognition and the possibility to license his work for important commercial brands and movies. He is one of the most successful artists on the Jamendo platform, with more than 900,000 downloads and 6.5 million listens.

Frederic Font is a post-doc researcher at the Music Technology Group of the Department of Information and Communication Technologies of Universitat Pompeu Fabra, Barcelona. His current research focuses on facilitating the reuse of audio content in music creation and audio production contexts. In addition to his research, Frederic leads the development of Freesound and coordinates the EU-funded Audio Commons Initiative.

22 May 2017 - 15:01 | view
New project to exploit music education technologies
The European Research Council has awarded Xavier Serra a Proof of Concept grant to complement the existing ERC Advanced Grant for the CompMusic project. This new project will be dedicated to promoting the exploitation of a number of technologies that can support online music education.
The TECSOME project will develop an exploitable system to facilitate the assessment of exercises submitted by music students taking online courses. The system will integrate technologies developed within the CompMusic project that measure the similarity between musical audio recordings, and a market strategy will be defined to exploit it. The goal is to develop an approach with which music performance courses can be scaled up to MOOC level.
Proof of Concept grants, worth €150,000 each and open to ERC grant holders, can be used to establish intellectual property rights, investigate business opportunities or conduct technical validation. Xavier Serra already received a Proof of Concept grant in 2015, CAMUT, in that case to exploit other CompMusic results for the particular case of Indian music.
The CompMusic project, funded with an ERC Advanced Grant in 2010, will finish in June 2017. In this project, a group of researchers led by Xavier Serra has worked on the automatic description of music by emphasizing cultural specificity, carrying out research in the field of music information processing with a domain-knowledge approach. They have developed information modelling techniques relevant to several non-Western music cultures, contributing to the overall field of Music Information Retrieval and to music exploration and education. TECSOME is a natural step in the technology transfer goals of the CompMusic project.
19 May 2017 - 11:30 | view
Web application developer position at the MTG

The MTG is looking for a web application developer to work within the EU-funded project RAPID-MIX.

Job description:
The selected candidate will be working on an online repository for multimodal data, assisting in its development as well as preparing application prototypes and demos in conjunction with the RAPID-MIX API.

• Back-end web development (Python, Flask, Docker, PostgreSQL)
• Some front-end development (Javascript, D3)
• Fluency in English (written and spoken)
• C++ experience, as well as experience working with sound and music technology, is a plus

Starting date: immediate
Dedication: Full time (3 months) / part time (6 months)

How to apply:
Interested candidates should send a resume as well as a brief motivation letter addressed to Panos Papiotis (panos [dot] papiotis [at] upf [dot] edu).

16 May 2017 - 14:01 | view
Three MIR talks by researchers from McGill
22 May 2017
Gabriel Vigliensoni, Martha Thomae, and Jorge Calvo-Zaragoza, from McGill University, Canada, will present their research on Monday, May 22nd, at 3:30pm in room 55.309.
Gabriel Vigliensoni
Title: A case study with the Music Listening Histories Dataset: Do demographic, profiling, and listening context features improve the performance of automatic music recommendation systems?
Abstract: Digital music services provide us with real-time access to millions of songs, and automatic music recommendation systems offer us new ways to discover music. These systems, however, do not account for the context of music listening, even though the function of music in everyday life depends on that context. Incorporating information about people's music listening habits can therefore improve recommendations. In this talk, I present my research on collecting music listening histories spanning half a million users, and I explain how insights generated from the data can improve the prediction accuracy of a music recommendation model.
Martha Thomae
Title: A Methodology for Encoding Mensural Music: Introducing the Mensural MEI Translator
Abstract: Polyphonic music from the Late Middle Ages (thirteenth century) and the Renaissance (fourteenth and fifteenth centuries) was written in mensural notation, a system of notation characterized by note durations that are context-dependent. Efforts have been made to encode this music in a machine-readable format, with the goal of preserving the repertoire in its original notation while still allowing for computational musical analysis. Only a few formats provide support for encoding this old system of notation; one of them is MEI (Music Encoding Initiative). Due to the inefficiency of hand-coding music in general, and the added complication in mensural notation of interpreting the value of the notes while coding, we propose a methodology that facilitates encoding the music into a Mensural MEI file through a tool we developed, the Mensural MEI Translator. The methodology allows the musicologist to enter the piece in a score editor instead of encoding it directly into a Mensural MEI file. Through a series of processes, this file is then converted into a Mensural MEI file that encodes the piece in its original (mensural) notation.
Jorge Calvo-Zaragoza
Title: Document Analysis for Music Scores with Deep Learning
Abstract: Content within musical documents is not restricted to notes but involves heterogeneous information such as symbols, text, staff lines, ornaments or annotations. Before any attempt at automatically recognizing the information on the scores with an Optical Music Recognition system, it is necessary to detect and classify each constituent layer of information into different categories. The greatest obstacle of this classification process is the high heterogeneity among music collections, which makes it difficult to propose methods that can be generalizable to a broad range of sources. This presentation discusses a data-driven document analysis framework based on the use of Deep Learning methods, namely Convolutional Neural Networks. It focuses on extracting the different layers within musical documents by categorizing the image at pixel level. 
The main advantage of the approach is that it can be used regardless of the type of document provided, as long as training data is available. We illustrate some of the capabilities of the framework by showing examples of common tasks that are frequently performed on images of musical documents. We believe that this framework will allow the development of generalizable and scalable automatic music recognition systems, thus facilitating the creation of large-scale browsable and searchable repositories of music documents.
16 May 2017 - 08:32 | view
2nd International Workshop on Quantitative and Qualitative Music Therapy Research
26 May 2017

Universitat Pompeu Fabra
Room 55.309 (located at Tanger Building)
Roc Boronat 138
08018 Barcelona

Music is known to have the power to induce strong emotions and physiological changes. Musical activities have a positive impact on the perception of quality of life and may even improve cognitive, social and emotional abilities. It is therefore not surprising that a variety of clinical conditions are often treated with music therapy. Large-scale studies have shown that music therapy produces significant improvements in social and overt behaviors, reductions in agitated behaviors, and improvements in cognitive problems. However, the positive effects of music therapy are not homogeneous across studies, and there is often a lack of formal research involving quantitative and qualitative methods to assess the benefits and limitations of music therapy in concrete treatments. A special topic in this year's workshop is Accessible Music Interfaces as a means to allow people with disabilities to perform and compose music. A concert involving several accessible music interfaces is planned as part of the workshop.

Workshop aims
The aim of the workshop is to promote fruitful collaboration among researchers, music therapists, musicians, psychologists and physicians who are interested in music therapy and its effects, evaluated by applying quantitative and qualitative methods. The workshop will provide the opportunity to learn about, present and discuss ongoing work in the area. We believe that this is a timely workshop because there is an increasing interest in quantitative and qualitative methods in music therapy.

The workshop will feature paper presentations and open discussions. The accepted contributions will be made available from the workshop web page as soon as possible in order to encourage active discussion during the workshop. At the end of each paper session there will be time allocated for discussion, focused initially on the research reported in the session contributions and then generalized to the session's overall topic. At the end of the workshop there will be a dedicated session to discuss the perspectives and future directions of quantitative and qualitative music therapy research.

15 May 2017 - 14:21 | view
Seminar and concert by Leonello Tarabella on the composer Pietro Grossi
18 May 2017

On Thursday May 18th, there will be two events dedicated to the Italian composer Pietro Grossi. At 15:30 in room 55.309, Leonello Tarabella will give a seminar on "The experience on computer music of Pietro Grossi in Pisa, Italy, in the centenary of his birth". At 19:30 there will be a concert in the Sala Polivalent, including acousmatic works by Pietro Grossi and audiovisual interactive works by Leonello Tarabella.

Abstract of seminar:
I will report on the approach of M° Pietro Grossi to using the computers of the '70s and '80s to produce music.
Due to the low power of the computers of those years (clocks at 10 MHz and, at best, no more than 1 MByte of RAM), it was impossible to synthesize audio signals of any relevance in real time. The technique commonly used was so-called "offline" synthesis: the computed samples of an audio signal were first accumulated sequentially on mass-memory supports such as magnetic tapes or disks and then, in order to produce sound, read back and sent at the proper sampling rate to a digital-to-analog converter. As a result, depending on the complexity of the sound synthesis model and the number of voices in a composition, the process could require hours of computation for a few minutes of music.
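The offline synthesis workflow described above can be sketched in a few lines of modern Python (this is only an illustration of the compute-store-play idea, not the historical FORTRAN/TAU2 code; the file name and tone parameters are arbitrary):

```python
# Minimal sketch of "offline" synthesis: compute all samples first,
# accumulate them on mass storage, and play them back only afterwards.
import math
import struct
import wave

SAMPLE_RATE = 44100  # modern rate; systems of the '70s used far lower ones

def synthesize(freq_hz, duration_s):
    """Compute every sample of a sine tone up front, free of any
    real-time constraint (this step could take arbitrarily long)."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# Step 1: compute the samples.
samples = synthesize(440.0, 1.0)

# Step 2: accumulate them sequentially on mass storage (a WAV file here,
# standing in for the magnetic tapes or disks of the era).
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                           for s in samples))

# Step 3 (playback) would read the file back and feed the samples to a
# DAC at the proper sampling rate; today any audio player does this.
```

The key point is the separation of steps 1 and 2 from step 3: computation speed no longer matters once the samples are on storage, which is exactly what made the technique viable on slow machines.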
The philosophy of M° Grossi was that of "real time". To this end, the special device "TAU2 audio terminal" was developed in Pisa at CNUCE and IEI, two institutes of C.N.R. (the Italian National Research Council) deeply involved in computer science research.
Based on a hybrid electronic architecture (digital control, analog synthesis), TAU2 was able to play with a polyphony of 12 voices under the control of an IBM 370/68 mainframe computer.
Once TAU2 was assembled and put to work by the many engineers and technicians of the CNUCE and IEI institutes, M° Grossi developed the music programs himself, writing hundreds and hundreds of lines of code in FORTRAN, in those years the most popular language in the scientific community.

Program of concert:
Pietro Grossi's acousmatic works: "Computer Music", "Paganini al Computer", "BACH/Grossi", "SATIE-JOPLIN-GROSSI", "SOUND LIFE".
Leonello Tarabella interactive works: "Strip lines", "Dance&Fluid", "Masses", "LHC", "Jacaranda", "Sound carrier", "Serenade".

More info on the concert: concert program

15 May 2017 - 09:59 | view
Seminar on Sound Interaction and Online Communities at the Biennale di Venezia

In the frame of the artistic project "La Venezia che non si vede. Unveiling the Unseen", part of the Biennale di Venezia, an international seminar called "Cartographies of the Unseen" will take place on May 15th and 16th. The MTG will participate in the seminar with a session on Sound Interaction and Online Communities conducted by Frederic Font.

11 May 2017 - 17:59 | view
Artistic residence for TELMI project

The TELMI project participates in the VERTIGO STARTS Artistic Residencies Program in order to host an artist who will work at the MTG, using the technologies of TELMI for a creative outcome. The collaboration with the artist will extend the dissemination of the technologies developed in the project through work in an artistic scenario.

VERTIGO is a project supported by the H2020 program that aims to catalyze new synergies between artists, cultural institutions, R&D projects in information and communication technologies (ICT), companies, incubators and funds. The call for artists is open until May 22nd, 2017. More information can be found on the VERTIGO website.


3 May 2017 - 14:42 | view