News and Events

Participation in the AES 53rd Conference on Semantic Audio

Frederic Font, Jordi Janer and Xavier Serra participate in the 53rd Conference on Semantic Audio of the Audio Engineering Society, which takes place in London from January 26th to 29th, 2014.

Xavier has been invited to give a talk on CompMusic entitled "Creating Research Corpora for the Computational Study of Music: the case of the CompMusic Project"; Frederic is giving a talk on his recent PhD research, "Audio clip classification using social tags and the effect of tag expansion"; and Jordi presents a paper written with David S. Blancas, "Sound Retrieval from Voice Imitation Queries in Collaborative Databases".

21 Jan 2014 - 10:50
TechTransfer position at the MTG through TECNIOspring

The MTG is part of a Catalan initiative named TECNIO, and through it there is an open call to incorporate an experienced researcher interested in carrying out TechTransfer activities.

TECNIOspring is a fellowship programme that provides financial support to individual mobility proposals presented by experienced researchers in liaison with a TECNIO centre (like our research group). Host institutions will offer fellows a stimulating and multidisciplinary scientific environment in which to develop their applied research projects, with a focus on technology transfer.

Fellows will be offered 2-year employment contracts in order to develop their applied research projects. Please note that this call has a strong focus on TechTransfer, so candidates are required to have at least one year of experience in applied research and/or technology transfer activities.

There are two types of fellowships:

  • Incoming - Mobility for experienced researchers of any nationality willing to join our centre for 2 years. Candidates must hold a PhD and four additional years of full-time equivalent research experience, or eight years of full-time equivalent research experience.
  • Outgoing + return - Mobility outside Spain for experienced researchers of any nationality residing in Catalonia who are willing to join a research or technology centre or the R&D department of a private company for one year. This scheme includes a return phase of one more year at the MTG. Candidates must hold a PhD.

Further details about the funding involved per fellowship, the eligibility criteria and the evaluation process are available in the programme leaflet. Those of you interested in applying, please send us a briefing about the project you propose, together with your CV, to mtg [at] upf [dot] edu (subject: tecniospring).

14 Jan 2014 - 14:24
Seminar by Julián Urbano on Evaluation in MIR
16 Jan 2014

Julián Urbano, postdoc at the MTG, will give a seminar on "Evaluation in (Music) Information Retrieval through the Audio Music Similarity task" on January 16th at 3:30pm in room 52.321.

Abstract: Test-collection-based evaluation in (Music) Information Retrieval has been used for half a century now as the means to evaluate and compare retrieval techniques and advance the state of the art. However, this paradigm makes certain assumptions that remain a research problem and that may invalidate our experimental results. In this talk I will approach this paradigm as an estimator of certain probability distributions that describe the final user experience. These distributions are estimated with a test collection, computing system-related distributions that are assumed to reliably correlate with the target user-related distributions. Using the Audio Music Similarity task as an example, I will talk about issues with our current evaluation methods, the degree to which they are problematic, and how to analyze them and improve the situation. In terms of validity, we will see how the measured system distributions correspond to the target user distributions, and how this correspondence affects the conclusions we draw from an experiment. In terms of reliability, we will discuss the optimal characteristics of test collections and statistical procedures. In terms of efficiency, we will discuss models and methods to greatly reduce the annotation cost of an evaluation experiment.
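
As a hedged illustration of the paradigm the abstract describes, the sketch below compares two hypothetical systems on a test collection of per-query effectiveness scores and uses a paired bootstrap to gauge how reliable the observed difference is; all names and numbers are placeholders, not material from the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical per-query effectiveness scores for two systems on the
    # same 50-query test collection (placeholders for real judgments).
    scores_a = rng.beta(5, 3, size=50)
    scores_b = rng.beta(4, 3, size=50)

    observed = scores_a.mean() - scores_b.mean()

    # Paired bootstrap over queries: resample queries with replacement and
    # see how often the sign of the mean difference flips.
    diffs = scores_a - scores_b
    boot = rng.choice(diffs, size=(10000, diffs.size), replace=True).mean(axis=1)
    p_flip = (boot <= 0).mean()

    print(f"mean difference: {observed:.3f}, P(difference <= 0) ~ {p_flip:.3f}")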

13 Jan 2014 - 17:35
MAIKA, the new Vocaloid singer by Voctro Labs, is now available!

Voctro Labs' Christmas gift has arrived! MAIKA, the new female Vocaloid 3 Voice Library, is a virtual singer that allows you to create vocal parts on your computer without the need to record a real singer. By simply entering a melody, lyrics and expression parameters, you'll be able to create lead vocals, vocal accompaniments, demo vocals and vocal effects; the possibilities are endless. MAIKA is designed to sing in Spanish, but contains a wide range of phonemes that will also cover parts of other languages such as Portuguese, Italian, Catalan, English and Japanese.

MAIKA has a powerful feminine voice: softer and more airy in the lower registers, with a more intense timbre in the higher ones. She has an extraordinarily broad pitch range, switching from chest voice to head voice in the highest registers. This makes her voice suited to a wide range of musical genres and styles.

You can buy the download edition directly or, if you prefer, order the limited boxed edition from Voctro Labs' website.

20 Dec 2013 - 13:42
Application open for Master and PhD programs of the UPF
19 Nov 2013 - 27 Jun 2014

From November 19th 2013 to June 27th 2014, applications are open for all the master's and doctoral programmes of the UPF for the 2014-2015 academic year.

For the Master in Sound and Music Computing you can find the information here. To do a PhD at the MTG you have to enrol in the PhD program in Information and Communication Technologies, whose information you can find here.

9 Dec 2013 - 00:29
Introduction to music therapy and the use of ICT in music therapy diagnostics
27 Nov 2013

Wednesday, Nov 27th, 2013, at 16h in room 52.S29 (in front of the canteen)

ABSTRACT: During the last 50 years, European and European-influenced music therapy has made great strides towards becoming an established part of health and social services and sciences. Selected examples from clinical practice will give a brief overview of the scope of music therapy approaches applied in very different fields of clinical practice. Moreover, one field of interdisciplinary research and development combining ICT and music therapy is that of microanalyses in music therapy. Over the last 10 years, qualitative research and standardized observation have been used to study therapeutic processes in music therapy; all of these methods are very time-consuming. However, they are very important for diagnostics and assessment in music therapy. First developments, such as the Music Therapy Toolbox based on MIR, are very promising for future use as diagnostic tools in music therapy. The state of the art of this field will be presented and discussed.

BIO: Thomas Wosch is professor of music therapy at the University of Applied Sciences of Würzburg and Schweinfurt in Germany. He is director of the Master in Music Therapy for clients with special needs and clients with dementia, and head of the final-year specialisation in music therapy in the BA in Social Work. He worked for 10 years as a music therapist in acute adult psychiatry, focusing on the treatment of schizophrenia, depression, anxiety disorders and borderline disorder. His special field of research is microanalysis in music therapy (the measurement of minimal changes in music therapy processes; see also Wosch & Wigram (eds.) (2007): Microanalysis in Music Therapy. London & Philadelphia: JKP). He has research cooperations and teaches internationally all over Europe, in the US, Australia and South America. He is co-editor of www.voice.no and of "Musik und Gesundsein" (music and health).

25 Nov 2013 - 12:14
Tan Özaslan defends his PhD Thesis on November 29th
29 Nov 2013

Tan Özaslan defends his PhD thesis entitled "Computational Analysis of Expressivity in Classical Guitar Performances" on November 29th 2013 at 11:00h in room 52.429 of the Communication Campus of the UPF.

Thesis directors: Josep Lluís Arcos and Xavier Serra
Jury members: Ramon Lopez de Mantaras (IIIA-CSIC), Isabel Barbancho (Universidad de Málaga), Rafael Ramirez (UPF)

Abstract: The study of musical expressivity is an active field in sound and music computing. The research interest comes from different motivations: to understand or model musical expressivity; to identify the expressive resources that characterize an instrument, musical genre, or performer; or to build synthesis systems able to play expressively. To tackle this broad problem, researchers focus on specific instruments and/or musical styles. Hence, in this thesis we focus on the analysis of expressivity in the classical guitar, and our aim is to model the use of the expressive resources of the instrument. The foundations of all the methods used in this dissertation are techniques from the fields of information retrieval, machine learning, and signal processing. We combine several state-of-the-art analysis algorithms in order to model the use of expressive resources. The classical guitar is an instrument characterized by the diversity of its timbral possibilities, and professional guitarists are able to convey many nuances when playing a musical piece; this specific characteristic of the classical guitar makes expressive analysis laborious. We divided our analysis into two main sections. The first section provides a tool able to automatically identify expressive resources in the context of real recordings: we build a model in order to analyze and automatically extract the three most-used expressive articulations, legato, glissando and vibrato. The second section provides a comprehensive analysis of timing deviations in classical guitar. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that point to a multi-dimensional phenomenon. Our system exploits feature extraction and machine learning techniques, and classification accuracies show that timing deviations are accurate predictors of the corresponding piece. To sum up, this dissertation contributes to the field of expressive analysis by providing an automatic expressive-articulation model and a musical-piece prediction system based on timing deviations. Most importantly, it analyzes the behavior of the proposed models using commercial recordings.
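
As a hedged illustration of the articulation-detection part of the thesis (not the actual pipeline, which is more elaborate), the sketch below flags vibrato as periodic modulation of the pitch contour in the 4-8 Hz band; the synthetic contour, frame rate and threshold are all assumed for the example.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fps = 100.0                        # pitch-contour frames per second (assumed)
    t = np.arange(0, 2.0, 1.0 / fps)
    # Synthetic f0 contour: a 330 Hz note with 6 Hz vibrato of ~17 cents depth.
    f0 = 330.0 * (1 + 0.01 * np.sin(2 * np.pi * 6.0 * t))

    cents = 1200 * np.log2(f0 / f0.mean())          # contour in cents around the mean
    b, a = butter(2, [4.0, 8.0], btype="band", fs=fps)
    vib_band = filtfilt(b, a, cents)                # keep only 4-8 Hz modulations

    # Call it vibrato if the modulation depth in the band exceeds a few cents.
    depth = np.sqrt(2) * vib_band.std()
    label = "vibrato" if depth > 10 else "no vibrato"
    print(f"vibrato depth ~ {depth:.1f} cents -> {label}")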

22 Nov 2013 - 18:19
Seminar by Jean-Julien Aucouturier on spectro-temporal receptive fields for MIR
22 Nov 2013

Jean-Julien Aucouturier, from CNRS/IRCAM, gives a seminar on "Spectro-temporal receptive fields (STRFs): a biologically-plausible alternative to MFCCs?" on Friday November 22nd at 15:30h in room 55.410.

Abstract: We describe some recent experiments to adapt a recent computational model of the mammalian auditory cortex to the tasks of Music Information Retrieval. The model, called Spectro-temporal Receptive Fields (STRFs), simulates the responses of auditory cortical neurons as a filterbank of Gabor functions tuned not only on frequencies, but also on rates (temporal modulations, in Hz) and scales (frequency modulations, in cycles/octave). Off the shelf, it provides a 30,000-dimensional feature space; when these dimensions are integrated, we can derive novel signal representations/features that (1) perform equivalently to or better than e.g. Mel-Frequency Cepstrum Coefficients for a task of audio similarity, (2) are somewhat amusing (e.g. dynamic frequency warping instead of DTW), and (3) are more plausible than the usual MIR features from a biological point of view.
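
For readers unfamiliar with the model, here is a minimal, assumption-laden sketch of the STRF idea: one 2D Gabor filter over a log-frequency spectrogram, tuned to a single rate (in Hz) and scale (in cycles/octave). A full STRF front-end uses a large bank of such filters, and the frame rate, resolution and kernel sizes below are illustrative.

    import numpy as np
    from scipy.signal import convolve2d

    frame_rate = 100.0    # spectrogram frames per second (assumed)
    bins_per_oct = 12     # log-frequency bins per octave (assumed)

    def gabor_strf(rate_hz, scale_cpo, n_t=64, n_f=24):
        """One Gabor kernel over (log-frequency, time), i.e. one model neuron."""
        t = (np.arange(n_t) - n_t / 2) / frame_rate     # seconds
        f = (np.arange(n_f) - n_f / 2) / bins_per_oct   # octaves
        T, F = np.meshgrid(t, f)
        envelope = np.exp(-(T / t.std()) ** 2 - (F / f.std()) ** 2)
        ripple = np.cos(2 * np.pi * (rate_hz * T + scale_cpo * F))
        return envelope * ripple

    # Toy log-frequency spectrogram: 48 bins x 200 frames of noise.
    spec = np.abs(np.random.default_rng(1).normal(size=(48, 200)))
    response = convolve2d(spec, gabor_strf(rate_hz=4.0, scale_cpo=0.5), mode="same")
    print(response.shape)   # per-bin, per-frame response of this one filter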

18 Nov 2013 - 10:46
Martín Haro defends his PhD thesis on November 22nd
22 Nov 2013

Martín Haro defends his PhD thesis entitled "Statistical Distribution of Common Audio Features" on Friday November 22nd 2013 at 11:00h in room 55.309 of the Communication Campus of the UPF.

The jury of the defense is: Josep Lluis Arcos (IIIA-CSIC), Emilia Gómez (UPF) and Jean-Julien Aucouturier (IRCAM).

Abstract: In the last few years, some Music Information Retrieval (MIR) researchers have spotted important drawbacks in applying standard algorithms, successful on monophonic music, to polyphonic music classification and similarity assessment. Noticeably, these so-called "Bag-of-Frames" (BoF) algorithms share a common set of assumptions. These assumptions rest on the belief that the numerical descriptions extracted from short-time audio excerpts (or frames) are enough to capture the relevant information for the task at hand, that these frame-based audio descriptors are time-independent, and that descriptor frames are well described by Gaussian statistics. Thus, if we want to improve current BoF algorithms we could: i) improve current audio descriptors, ii) include temporal information within algorithms working with polyphonic music, and iii) study and characterize the real statistical properties of these frame-based audio descriptors. From a literature review, we have detected that many works focus on the first two improvements but, surprisingly, there is a lack of research on the third one. Therefore, in this thesis we analyze and characterize the statistical distribution of common audio descriptors of timbre, tonal and loudness information. Contrary to what is usually assumed, our work shows that the studied descriptors are heavy-tailed and thus do not belong to a Gaussian universe. This new knowledge led us to propose new algorithms that show improvements over the BoF approach in current MIR tasks such as genre classification, instrument detection, and automatic tagging of music. Furthermore, we also address new MIR tasks such as measuring the temporal evolution of Western popular music. Finally, we highlight some promising paths for future audio-content MIR research that will inhabit a heavy-tailed universe.
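
As a hedged companion to the abstract, the sketch below shows the kind of check the thesis motivates: testing whether frame-based descriptor values look Gaussian or heavy-tailed. A synthetic Student-t track stands in for real descriptor frames; only the test logic is the point.

    import numpy as np
    from scipy import stats

    # Placeholder "descriptor frames": heavy-tailed synthetic data standing in
    # for real per-frame values of a timbral, tonal or loudness descriptor.
    frames = stats.t.rvs(df=3, size=5000, random_state=np.random.default_rng(2))

    # Excess kurtosis is 0 for a Gaussian; large positive values mean heavy tails.
    print(f"excess kurtosis: {stats.kurtosis(frames):.2f}")

    # Compare a Gaussian fit against a Student-t fit by log-likelihood.
    mu, sigma = stats.norm.fit(frames)
    df_, loc, scale = stats.t.fit(frames)
    ll_norm = stats.norm.logpdf(frames, mu, sigma).sum()
    ll_t = stats.t.logpdf(frames, df_, loc, scale).sum()
    print(f"log-likelihood: Gaussian {ll_norm:.0f} vs Student-t {ll_t:.0f}")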

15 Nov 2013 - 14:30
GiantSteps kick-off: creating the next generation of digital musical tools
21 Nov 2013 - 22 Nov 2013

GiantSteps (Seven League Boots for Music Creation and Performance) is a STREP project funded by the European Commission and coordinated by the MTG in collaboration with JCP Consult, which will last 36 months starting on November 1st, 2013.

The project aims to create the "seven-league boots" for music production in the next decade and beyond. Just as the boots of European folklore tradition enable huge leaps, GiantSteps proposes digital music tools for the near future, to empower all musical users, from professionals to casual musicians and even children. By bringing in different types of musical knowledge in the form of musical expert agents (harmonic, melodic, rhythmic and song-structure agents) at different proficiency levels according to the specific needs of the user, the GiantSteps tools seek to stimulate the inspiration of all users: allowing them to create collaboratively, boosting mutual inspiration, helping anyone to learn and discover while creating, and helping professionals work faster while maintaining their creative flow.

In order to meet this ambitious goal, the GiantSteps project has assembled a strong, balanced consortium representing the whole value chain and including leading agents from research institutions (MTG-UPF and Johannes Kepler University Linz, JKU), industrial partners (Native Instruments and Reactable Systems) and music practitioners (STEIM and Red Bull Music Academy). With this consortium, and with the project coordination by JCP-Consult, GiantSteps will be able to combine techniques and technologies in new and unprecedented ways. This includes combining state-of-the-art interfaces and interface design techniques with advanced methods in Music Information Research that have not yet been applied in a real-time interaction context or with creativity objectives. The consortium's industry organizations will guarantee the alignment of these cutting-edge technologies with existing market requirements, allowing for a smooth integration of research outcomes into real-world systems. The MTG team, coordinated by Sergi Jordà and Perfecto Herrera, will bring its expertise in music information retrieval (MIR) and advanced interaction.

The GiantSteps kick-off meeting will take place at the MTG on November 21st and 22nd.

14 Nov 2013 - 18:25