News and Events

Participation in Cultural Innovation event

The MTG will participate in an event about cultural innovation organized by the Culture Institute of the Barcelona City Council, which will take place on March 10th at 4PM at Canòdrom-Parque de Investigación Creativa (Barcelona).

Sonia Espí will introduce Freesound and participate in a debate about digital heritage.

 

8 Mar 2017 - 13:01
Sónar Innovation Challenge 2017: Call for Creators
This year the MTG is once again collaborating with the Sónar+D festival to organize the Sónar Innovation Challenge (SIC).
 
SIC is a platform for collaboration between innovative tech companies and the creative community (programmers, designers, artists), aimed at producing disruptive prototypes to be showcased at Sónar+D. The interaction between companies and creators happens through challenges proposed by the companies themselves. Challenges are open questions to the creative community that can be approached from a technical and/or artistic perspective.
 
This year SIC proposes 6 challenges, spanning from intelligent DJ assistants and context-aware music flow to VR, interface design, and affective computing. The SIC calendar:
  • March 6th - April 7th: Open Call for Creators. You can apply to one or more challenges from the website.
  • April 21st: Announcement of selected participants. A group of 5 creators will be assigned to each challenge. This means that you will work on the challenge in collaboration with others.
  • May 1st - June 13th: You will have the chance to meet and work remotely with your team to plan the best solution for the challenge in collaboration with company mentors.
  • June 13th: Kick-off meeting @ Barcelona (place TBC)
  • June 14th-16th: Working sessions & showcases @ Sónar+D (Barcelona)
  • June 17th: You will chill out and enjoy Sónar+D and Sónar Festival
The selection process will be based on the profile, motivation, and previous experience of the participants. In addition, creators coming from abroad will receive travel aid to help cover mobility and accommodation expenses.
 
7 Mar 2017 - 12:44
Special Issue of IEEE MultiMedia Magazine

Emilia Gómez has co-edited, in collaboration with Cynthia Liem (TU Delft, The Netherlands) and George Tzanetakis (University of Victoria, Canada), a Special Issue of IEEE MultiMedia Magazine on the topic of Multimedia Technologies for Enriched Music Performance, Production, and Consumption. This special issue gathers state-of-the-art research on multimedia methods and technologies aimed at enriching music performance, production, and consumption, and it is linked to the topic of the recently finished PHENICX project. This is the full reference to the editorial paper:

Liem, C., Gómez, E., Tzanetakis, G. (2017), "Multimedia Technologies for Enriched Music Performance, Production and Consumption," IEEE MultiMedia, 24(1), pp. 20-23.

The full issue can be accessed here.

7 Mar 2017 - 11:52
Participation in ICASSP 2017

Jordi Pons participates in the 42nd International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017), which takes place in New Orleans, USA, on March 5-9, 2017. He presents the following paper, done in the context of the Maria de Maeztu DTIC strategic program.

Jordi Pons and Xavier Serra. "Designing Efficient Architectures for Modeling Temporal Features with Convolutional Neural Networks."

6 Mar 2017 - 10:41
OVAL by OVAL SOUND
9 Mar 2017
Alex Posada, co-founder of Oval Sound, will give a talk on their music tech startup and the Oval, the first digital Handpan ever developed. The talk will take place on Thursday, March 9th 2017, at 12:00h in room 55.410 of the Communication Campus.
 
OVAL by OVAL SOUND
 
Founded in 2014 by Ravid Goldschmidt and Alex Posada in Barcelona (Spain), OVAL SOUND is the music technology startup behind Oval, the first digital Handpan ever developed.
 
The Oval is a new electronic musical instrument designed from the ground up to revolutionize music creation, learning, and live performance. This next-generation instrument has sensitive multi-gesture sensors that combine the expressiveness of an acoustic instrument with the flexibility of modern digital synthesis. To take full advantage of each subtle gesture, the Oval comes with its own multi-platform synthesis and effects app; paired together, they create a truly immersive playing experience.
The Oval app supports both iOS and Android, and comes with custom high-quality sound libraries ready to play. Or, if you're feeling adventurous, you can connect the Oval to your favorite MIDI-enabled music apps, synths, and even lighting rigs - the possibilities are endless.
 
The project was launched via a crowdfunding campaign on Kickstarter in 2015, and to this day it remains one of the most successful campaigns on the platform.
 
2 Mar 2017 - 10:28
Seminar by Geraint Wiggins on Computational Creativity
2 Mar 2017

Geraint A. Wiggins, from Queen Mary University of London, gives a seminar on "Creativity, deep symbolic learning, and the information dynamics of thinking" on Thursday, March 2nd 2017, at 15:30h in room 55.309 of the Communication Campus of the UPF.

Abstract: I present a hypothetical theory of cognition based on the principle that mind/brains are information processors and compressors that are sensitive to certain measures of information content, as defined by Shannon (1948). The model is intended to help explicate processes of anticipatory and creative reasoning in humans and other higher animals. The model is motivated by the evolutionary value of prediction in information processing in an information-overloaded world.
The Information Dynamics of Thinking (IDyOT) model brings together symbolic and non-symbolic cognitive architectures, by combining sequential modelling with hierarchical symbolic memory, in which symbols are grounded by reference to their perceptual correlates. This is achieved by a process of chunking, based on boundary entropy, in which each segment of an input signal is broken into chunks, each of which corresponds with a single symbol in a higher level model. Each chunk corresponds with a temporal trajectory in the complex Hilbert space given by a spectral transformation of its signal; each symbol above each chunk corresponds with a point in a higher space which is in turn a spectral representation of the lower space. Norms in the spaces admit measures of similarity, which allow grouping of categories of symbol, so that similar chunks are associated with the same symbol. This chunking process recurses “up” IDyOT’s memory, so that representations become more and more abstract.
It is possible to construct a Markov Model along a layer of this model, or up or down between layers. Thus, predictions may be made from any part of the structure, more or less abstract, and it is in this capacity that IDyOT is claimed to model creativity, at multiple levels, from the construction of sentences in everyday speech to the improvisation of musical melodies.
IDyOT’s learning process is a kind of deep learning, but it differs from the more familiar neural network formulation because it includes symbols that are explicitly grounded in the learned input, and its answers will therefore be explicable in these terms.
In this talk, I will explain and motivate the design of IDyOT with reference to various different aspects of music, language and speech processing, and to animal behaviour.
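As a toy illustration of the boundary-entropy chunking described in the abstract (a minimal sketch of the general idea in Python, not IDyOT's actual implementation), one can train a bigram model over a symbol sequence and place chunk boundaries at local peaks of next-symbol entropy:

    # Toy sketch of boundary-entropy chunking (illustrative only; not the
    # actual IDyOT implementation).
    from collections import Counter, defaultdict
    from math import log2

    def bigram_entropies(seq):
        """Entropy of the next-symbol distribution at each position of seq."""
        follow = defaultdict(Counter)
        for a, b in zip(seq, seq[1:]):
            follow[a][b] += 1
        entropies = []
        for sym in seq[:-1]:
            counts = follow[sym]
            total = sum(counts.values())
            entropies.append(-sum((c / total) * log2(c / total)
                                  for c in counts.values()))
        return entropies

    def chunk_by_entropy(seq):
        """Split seq after every local peak in boundary entropy."""
        h = bigram_entropies(seq)
        chunks, start = [], 0
        for i in range(1, len(h) - 1):
            if h[i] > h[i - 1] and h[i] >= h[i + 1]:  # local entropy peak
                chunks.append(seq[start:i + 1])
                start = i + 1
        chunks.append(seq[start:])
        return chunks

    print(chunk_by_entropy(list("abcabcabdabcabc")))

On this toy sequence, boundaries fall after each occurrence of "b", the only symbol whose continuation is uncertain; in IDyOT, each resulting chunk would then be associated with a single symbol in the next, more abstract memory layer.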

24 Feb 2017 - 18:13
YoMo: Presentation of the TELMI project at the Mobile World Congress

On February 27th at 1PM we will be presenting the TELMI project at the Youth Mobile Festival (YoMo), which is part of the GSMA Mobile World Congress. The festival is aimed at students from 10 to 16 years old, and around 15,000 students are expected to visit.

The presentation will focus on how interactive technologies can help in instrument learning; we will show some demos, and participants will be able to try some prototypes.

Find more details of the activity here:
http://www.mwcyomo.com/en/activities/music-technology-artificial-intelli...

21 Feb 2017 - 17:46
Journal paper on the QUARTET dataset and the Repovizz system

Over the past few years, there has been an increasingly active discussion about publishing and accessing datasets for reuse in academic research. Although sometimes driven by concrete needs concerning a particular dataset or project, this topic is not incidental. In a data-driven research community like ours, it is very healthy to exchange ideas and perspectives on how to devise flexible means for making our data and results accessible, a valuable pursuit towards supporting research reproducibility.

The Music Technology Group of UPF hosts and provides free access to a number of datasets for music and audio research. As normally happens with other published datasets, one needs to download the data files and procure the means for exploring them locally. If limited to audio files or annotations, such a process does not generally bring significant difficulties other than data volume. However, as the number and nature of modalities, extracted descriptors, and annotations increase (think of motion capture, video, physiological signals, time series with different sample rates, etc.), difficulties arise not only in the design or adoption of formatting schemes, but also in the availability of platforms that enable and facilitate exchange by providing simple ways to remotely visualize or explore the data before downloading.

In the context of several recent projects focused on music performance analysis and multimodal interaction, we had to collect, process, and annotate music performance recordings that included dense data streams of different modalities. Envisioning the future release of our dataset for the research community, we realized the need for better means to explore and exchange data. Since then, at UPF we have been developing Repovizz, a remote hosting platform for multimodal data storage, visualization, annotation, and selective retrieval via a web interface and a dedicated API.

By way of the recently published article E. Maestre, P. Papiotis, M. Marchini, Q. Llimona, O. Mayor, A. Pérez and M. Wanderley, "Enriched Multimodal Representations of Music Performances: Online Access and Visualization," IEEE MultiMedia, vol. 24, no. 1, pp. 24-34, 2017, we introduce Repovizz to the MIR community and open access to the QUARTET dataset, a fully annotated collection of string quartet multimodal recordings released through Repovizz.

For a short, unadorned video demonstrating Repovizz, please go to http://www.youtube.com/watch?v=JcHbGtltuG4. Although still under development, Repovizz can be used by anyone in the academic community.
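To give a rough flavour of what selective retrieval via such an API could look like (the host and routes below are hypothetical placeholders, not the documented Repovizz API; please refer to the Repovizz documentation for the actual calls), a client might inspect a datapack's streams and fetch a short segment of one signal before committing to a full download:

    # Hypothetical sketch of selective remote retrieval from a multimodal
    # repository in the style of Repovizz. The base URL and routes are
    # placeholders, NOT the documented Repovizz API.
    import requests

    BASE = "https://example-repovizz-host/api"  # placeholder host

    def get_datapack_index(datapack_id):
        """List the streams (audio, mocap, descriptors) in a datapack."""
        r = requests.get("{}/datapacks/{}".format(BASE, datapack_id))
        r.raise_for_status()
        return r.json()

    def get_stream_segment(datapack_id, stream_id, start_s, end_s):
        """Retrieve a time segment of one stream rather than the whole file."""
        r = requests.get(
            "{}/datapacks/{}/streams/{}".format(BASE, datapack_id, stream_id),
            params={"start": start_s, "end": end_s},
        )
        r.raise_for_status()
        return r.content

    # Explore before downloading: list the modalities, then pull 10 s of one
    # signal (datapack and stream names are made up for the example).
    index = get_datapack_index("quartet-exercise-01")
    print([s["name"] for s in index["streams"]])
    segment = get_stream_segment("quartet-exercise-01", "violin1-bow-velocity", 0, 10)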

The QUARTET dataset comprises 96 recordings of string quartet exercises involving solo and ensemble conditions, containing multichannel audio (ambient microphones and piezoelectric pickups), video, motion capture (optical and magnetic) of instrumental gestures and of musician upper bodies, computed bowing gesture signals, extracted audio descriptors, and multitrack score-performance alignment. The dataset, processed and curated over the past few years, partly in the context of the PhD dissertation work of Panos Papiotis on ensemble interdependence, is now freely available to the research community.

16 Feb 2017 - 16:15
MUTEK'17: Artist in residence at MTG

As in previous years, we have established a collaboration with the MUTEK festival to promote the use of our technologies within the artist community.

This year there is an open call for artists to create a work using technologies developed in the context of the RAPID-MIX project.

The selected artist will work at the MTG and will have support from researchers during the residency. The final work will be presented as part of the MUTEK festival program on March 9th at Mazda Space.

Open call: artist-in-residence mutek8

14 Feb 2017 - 18:14
Journal paper on orchestral music source separation along with a new dataset
We are glad to announce the publication of a journal paper on orchestral music source separation, along with the PHENICX-Anechoic dataset. The methods were prototyped during the PHENICX project and were used for tasks such as orchestra focus and instrument enhancement. To our knowledge, this is the first time source separation has been objectively evaluated in such a complex scenario.

M. Miron, J. Carabias-Orti, J. J. Bosch, E. Gómez and J. Janer, "Score-informed source separation for multi-channel orchestral recordings," Journal of Electrical and Computer Engineering, 2016.

Abstract: This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores, so a reliable separation requires a good alignment of the score with the audio of the performance. To that end, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: a highly reverberant image, large ensembles with rich polyphony, and a large variety of instruments recorded within a distant-microphone setup. To solve these problems, we design context-specific methods such as the refinement of score-following output in order to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with distant-microphone orchestral recordings. We then propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments from an orchestral ensemble. The evaluation aims at analyzing the interactions of important parts of the separation framework on the quality of separation. Results show that we are able to align the original score with the audio of the performance and separate the sources corresponding to the instrument sections.
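The core mechanism can be illustrated with a toy score-informed NMF update (a generic, simplified sketch, not the multichannel system evaluated in the paper): the aligned score, widened by a tolerance window around onsets and offsets, masks the note-activation matrix so that only notes the score permits can explain the mixture.

    # Toy sketch of score-informed NMF (a generic, simplified variant; not the
    # paper's multichannel system). The aligned score constrains which note
    # activations may be nonzero.
    import numpy as np

    def score_informed_nmf(V, W, score_mask, n_iter=200, eps=1e-10):
        """V: magnitude spectrogram (freq x time); W: fixed note templates
        (freq x notes); score_mask: binary (notes x time), 1 where the aligned
        score (plus a tolerance window) allows a note to be active."""
        H = np.random.rand(W.shape[1], V.shape[1]) * score_mask
        for _ in range(n_iter):
            V_hat = W @ H + eps
            # Multiplicative update for H under the KL divergence.
            H *= (W.T @ (V / V_hat)) / (W.T @ np.ones_like(V) + eps)
            H *= score_mask  # zero out activations the score forbids
        return H

    def separate(V, W, H, note_idx, eps=1e-10):
        """Wiener-style reconstruction of one instrument's spectrogram from
        the indices of its notes in W and H."""
        source = W[:, note_idx] @ H[note_idx, :]
        return V * source / (W @ H + eps)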

The PHENICX-Anechoic dataset includes audio and annotations useful for tasks such as score-informed source separation, score following, multi-pitch estimation, transcription, or instrument detection in the context of symphonic music. This dataset is based on the anechoic recordings described in this paper:

Pätynen, J., Pulkki, V., and Lokki, T., "Anechoic recording system for symphony orchestra," Acta Acustica united with Acustica, vol. 94, no. 6, pp. 856-865, November/December 2008.
 
For more information about the dataset and how to download it, you can access the PHENICX-Anechoic web page.
14 Feb 2017 - 13:19