News and Events

Tutorial on Beijing Opera and computational tools for its analysis
18 Nov 2014 - 20 Nov 2014

This is a three-hour tutorial that we gave at ISMIR in Taipei and that we are now repeating here:

Jingju music: concepts and computational tools for its analysis
Xavier Serra, Rafael Caro Repetto, Sankalp Gulati, Ajay Srinivasamurthy
Tuesday, Nov 18, & Thursday, Nov 20, 10:00am-12:00pm, Room 55.230

Abstract: Jingju (also known as Peking or Beijing opera) is one of the most representative genres of Chinese traditional music. From an MIR perspective, jingju music offers interesting research topics that challenge current MIR tools. The singing/acting characters in jingju are classified into predefined role-type categories with characteristic singing styles. Their singing is accompanied by a small instrumental ensemble, within which a high-pitched fiddle, the jinghu, is the most prominent instrument in the characteristic heterophonic texture. The melodic conventions that form jingju modal systems, known as shengqiang, and the percussion patterns that signal important structural points in the performance offer interesting research questions. The overall rhythmic organization into predefined metrical patterns known as banshi also makes tempo tracking and rhythmic analysis a challenging problem. Since Chinese is a tonal language, the intelligibility of the text requires the expression of tonal categories in the melody, which offers an appealing scenario for research on the lyrics-melody relationship. The role of the performer as a core agent of musical creativity gives jingju music a notable space for improvisation; the lyrics and scores cannot be taken as authoritative sources, but rather as transcriptions of particular performances.

In this tutorial we will give an overview of jingju music, of the relevant problems that can be studied from an MIR perspective, and of the use of specific computational tools for its analysis. The tutorial is organized in three parts: the first is an introduction to jingju from a musicological perspective, the second covers diverse audio analysis tools relevant to the study of jingju (using http://essentia.upf.edu), and the last presents and discusses specific examples of analyzing jingju arias with those tools (work done in the context of http://compmusic.upf.edu).

Contents:
Tuesday, Nov 18, 10:00am-12:00pm, Room 55.230
1. Presentation (Xavier Serra)
2. Introduction to jingju music (Rafael Caro)
3. Computational framework (Xavier Serra)
4. Research problems (Xavier Serra, Rafael Caro)

Thursday, Nov 20, 10:00am-12:00pm, Room 55.230
5. Computational tools for melodic description of jingju music (Sankalp Gulati)
6. Computational tools for rhythm analysis of jingju music (Ajay Srinivasamurthy)
7. Conclusions (Xavier Serra)

13 Nov 2014 - 09:51
Participation in the "Atles de la Innovació a Catalunya"

Jordi Bonada participates in the public presentation of the "Atles de la Innovació a Catalunya" at Fàbrica Moritz on November 13th, 2014. This is an event organized by the Plataforma Coneixement, Territori i Innovació that showcases several collaborations between universities and industry. In particular, he will introduce the collaboration between UPF and Yamaha Corp. in the context of singing voice synthesis.
 

13 Nov 2014 - 09:01
Seminar by Fernando Villavicencio on voice transformation applied to singing voice
14 Nov 2014

Fernando Villavicencio, from the National Institute of Informatics in Japan, gives a seminar titled "Some applications of Voice-Transformation to Singing-Voice" on Friday, November 14th, at 12:00h in room 52.s31.

Abstract:
Voice conversion aims at transforming the voice of a “source” speaker so that it becomes perceptually identifiable as that of a particular “target” speaker. Although most work in this field has focused on spoken voice, there are potential applications in singing-voice synthesis (e.g. converting a singer’s database in singing synthesis systems).

In this talk we will present previous work applying voice conversion and related voice transformation experiments to Yamaha’s singing-voice synthesizer “VOCALOID”. This work includes singer identity conversion by means of statistical timbre-feature mapping (GMM) and accurate spectral envelope modelling. Experiments on voice-quality transformation by source/filter estimation and transformation, aimed at wide vocal-quality control starting from modal (normal) singing, will also be presented. The talk will include an introduction to the National Institute of Informatics and the Sound Media Group.
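The GMM-based timbre mapping named in the abstract is commonly realized as a responsibility-weighted sum of per-component linear regressions over a joint source/target model. Below is a minimal NumPy sketch of that generic scheme (not the specific model used in the talk); all parameter values are made up for illustration:

```python
import numpy as np

def convert(x, weights, means, covs):
    """Map a source feature vector x to the target space using a joint GMM.

    The GMM is defined over stacked vectors z = [x; y], with x and y of
    equal dimension d: weights (M,), means (M, 2d), covs (M, 2d, 2d).
    Conversion: y_hat = sum_m p(m|x) * (mu_y_m + S_yx S_xx^{-1} (x - mu_x_m)).
    """
    d = x.shape[0]
    # responsibilities p(m | x) from the marginal GMM over x
    resp = np.empty(len(weights))
    for m, (w, mu, S) in enumerate(zip(weights, means, covs)):
        diff = x - mu[:d]
        S_xx = S[:d, :d]
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(S_xx))
        resp[m] = w * np.exp(-0.5 * diff @ np.linalg.solve(S_xx, diff)) / norm
    resp /= resp.sum()
    # responsibility-weighted per-component linear regressions
    y = np.zeros(d)
    for m, (mu, S) in enumerate(zip(means, covs)):
        y += resp[m] * (mu[d:] + S[d:, :d] @ np.linalg.solve(S[:d, :d], x - mu[:d]))
    return y

# Toy single-component model whose joint covariance encodes the relation y = 2x:
y_hat = convert(np.array([1.5]), [1.0],
                [np.array([0.0, 0.0])],
                [np.array([[1.0, 2.0], [2.0, 5.0]])])
# here S_yx S_xx^{-1} = 2, so y_hat[0] == 3.0
```

In practice the model parameters are trained on aligned source/target feature pairs; with several components, the soft responsibilities let the mapping vary smoothly across the timbre space.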

Biography:
Fernando Villavicencio was born in 1977 in Guadalajara, Mexico. He received a Bachelor’s degree in Electronics and Communications Engineering from the Autonomous University of Guadalajara (1999) and Master’s degrees in Telecommunications (2002) and Signal Processing & Digital Design (2003) from the National Polytechnic Institute of Mexico and Rennes I University in France, respectively. He received his doctorate from Pierre and Marie Curie University in Paris for his research on high-quality voice conversion at the Institute for Research and Coordination in Acoustics/Music (IRCAM), under the supervision of Xavier Rodet and Axel Röbel.

F. Villavicencio joined the Music Technology Group of Universitat Pompeu Fabra (2008-2009) to work on singing-voice transformation within a collaboration with Yamaha Corporation. From 2009 to September 2014 he was a member of the Speech Technology Group of Yamaha’s Sound Technology R&D Center in Hamamatsu, Japan. He is currently a post-doctoral fellow at the Sound Media Group of the National Institute of Informatics in Tokyo, Japan. His research interests include speech synthesis, voice transformation, and speaker verification.

11 Nov 2014 - 17:17
Participation in III Jornadas AEOS - Fundación BBVA

Julio Carabias, Emilia Gómez and Alba B. Rosado participate in the 'III Jornadas AEOS - Fundación BBVA', which takes place on November 12-13th, 2014 in Madrid. Emilia Gómez, its Principal Investigator, is presenting the PHENICX project, including a recently released prototype that integrates a number of technologies provided by partners, as well as some demonstrators based on the repovizz repository for multimodal data.

Part of this conference will be devoted to taking a close look at the digital strategies and tools that can help orchestras project themselves more successfully to the rest of the world and achieve more active communication with their current and future audiences, both on-site and online.

11 Nov 2014 - 14:18
Participation in ISMIR 2014

Julio Carabias, Rafael Caro Repetto, Emilia Gómez, Sankalp Gulati, Nadine Kroher, Agustín Martorell, Marius Miron, Ajay Srinivasamurthy, and Xavier Serra participate in the 15th International Society for Music Information Retrieval Conference (ISMIR 2014), which takes place in Taipei (Taiwan) from October 27th to 31st, 2014. These are the papers being presented:

Rafael Caro Repetto, Ajay Srinivasamurthy, Sankalp Gulati and Xavier Serra also give a tutorial on "Jingju music: concepts and computational tools for its analysis", and there are a number of presentations in the late-breaking/demo sessions.

 

22 Oct 2014 - 09:05
Participation in ISWC 2014

Frederic Font and Sergio Oramas participate in the 13th International Semantic Web Conference, which takes place from October 19th to 23rd, 2014 in Trentino, Italy. They present the following paper:

22 Oct 2014 - 08:42
Presentation event: Diccionari de les tecnologies del so i de la música
24 Oct 2014

Next Friday, October 24th, at 11 am, in the Sala de Graus of the Tànger Building (55.309) at the Communication-Poblenou Campus of Universitat Pompeu Fabra, the Diccionari de les tecnologies del so i de la música will be presented, a dictionary gathering nearly 250 entries related to the specialized language of this technological field. The speakers at the event will be Enric Peig, Director of the Escola Superior Politècnica (ESUP) of UPF; Cristina Bofill, from the Engineering and Technology Area of TERMCAT; and Emilia Gómez, Professor at DTIC-ESUP and member of the Music Technology Group (MTG) of UPF.

The aim of this presentation event is to address the importance of, and the issues surrounding, technological and domain-specific dictionaries in Catalan, the multilingualism policy of ESUP within its Engineering degrees, and the process followed in creating the Diccionari de les tecnologies del so i de la música.

This dictionary was prepared by the Music Technology Group of the Department of Information and Communication Technologies and the Escola Superior Politècnica of Universitat Pompeu Fabra, and coordinated by Emilia Gómez and Felipe Luis Navarro. Its aim is to collect the specific terms accepted and used in the field of sound and music technologies that students and professionals in the area need to know. Besides terms designating general concepts, it also includes more specific ones related to sound analysis and synthesis, acoustics, psychoacoustics, digital effects, devices and hardware.

The dictionary contains 247 terminological records with designations in Catalan, Spanish and English, together with the corresponding definition in Catalan. In some records, next to a designation that is an unadapted loanword, the source language of the term is indicated in square brackets, to signal that the usual pronunciation of the word follows the phonetic rules of that language rather than those of Catalan. Some of these unadapted loanwords are pending review by the Supervisory Council of TERMCAT.

The project has received support from various organizations involved in the development and integration of Catalan terminology in specialized sectors and in society at large. TERMCAT and the Gabinet Lingüístic of Universitat Pompeu Fabra provided methodological and documentary advice, and the development of the project was co-funded by the Centre for Teaching Quality and Innovation (CQUID) through its 2012-2013 teaching quality and innovation support grants (plaQUID). Researchers from the Music Technology Group and from the Sonology Department of the Escola Superior de Música de Catalunya also participated in the project.

15 Oct 2014 - 13:24
Galata Electroacoustic Orchestra at La Biennale di Venezia 2014

Last year the MTG participated in the Galata Electroacoustic Orchestra (GEO) project, a European Erasmus Intensive Programme; as a result, a large orchestra was created with members from five institutions in different countries.

This year the orchestra has successfully performed again at La Biennale di Venezia, on October 5th. The orchestra used traditional instruments and laptops, conducted by Roberto Doati and Tolga Tüzun. The members of the orchestra representing the MTG were Robert Clouth, Nadine Kroher, Rubén Martínez Orio, Felipe L. Navarro and Álvaro Sarasúa (PhD students, and former SMC master and ESMUC students).

13 Oct 2014 - 16:08
IRMAS: A Dataset for Instrument Recognition in Musical Audio Signals

We are glad to announce the release of a dataset for Instrument Recognition in Musical Audio Signals (IRMAS dataset).

This dataset was used in the evaluation of the article:
Bosch, J. J., Janer, J., Fuhrmann, F., & Herrera, P. “A Comparison of Sound Segregation Techniques for Predominant Instrument Recognition in Musical Audio Signals”, in Proc. ISMIR (pp. 559-564), 2012

IRMAS is intended to be used for training and testing automatic instrument recognition methods, in a varied set of professionally produced western music recordings. The dataset includes a total of 6705 excerpts for training, and 2874 excerpts for testing. The instruments considered are: cello, clarinet, flute, acoustic guitar, electric guitar, organ, piano, saxophone, trumpet, violin, and human singing voice.

Further information about the music collection, and about how the samples were created and annotated, is available on the dataset website, where you can also download the audio excerpts and metadata. Given the size of the collection (over 10 GB), you can first download a sample of the testing and training data to see if it fits your needs.
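To illustrate how such a dataset is typically consumed programmatically, here is a small Python sketch that tallies instrument labels from excerpt filenames. The bracketed three-letter label convention used below (e.g. [cel] for cello, [gac] for acoustic guitar) is an assumption for illustration only; check the dataset website for the authoritative file naming and annotation format.

```python
import re
from collections import Counter

# Assumed filename convention (illustrative only): training excerpts carry
# their instrument label in square brackets, e.g. "008__[cel]0058__1.wav".
LABEL_RE = re.compile(r"\[([a-z]{3})\]")

def labels_from_filename(filename):
    """Return all three-letter instrument codes found in a filename."""
    return LABEL_RE.findall(filename)

def count_labels(filenames):
    """Aggregate instrument-label counts over an iterable of filenames."""
    counts = Counter()
    for name in filenames:
        counts.update(labels_from_filename(name))
    return counts

# Example with made-up filenames following the assumed convention:
files = ["008__[cel]0058__1.wav", "012__[gac]0033__2.wav", "100__[cel]0001__3.wav"]
counts = count_labels(files)  # cel appears twice, gac once
```

A tally like this gives a quick sanity check of class balance before training a recognition model.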

We hope that IRMAS will be useful to our scientific community, and we would be very interested in receiving your feedback.

1 Oct 2014 - 16:22
New students in the SMC master program

In this new academic year 2014-2015, seventeen new students have joined the Master in Sound and Music Computing:

Iñigo Angulo Otegui (Spain), Yile Yang (China), Pritish Chandna (India), Jordi Pons (Spain), Francesc Capó Clar (Spain), Miquel Espósito Pérez (Spain), Swapnil Gupta (India), Adriana Suarez Iguaran (Colombia), Ignasi Adell Arteaga (Spain), Xavier Eduard Lizarraga Seijas (Spain), Serkan Ozer (Turkey), Sanjeel Parekh (India), Jaume Parera Bonmati (Spain), Lorenzo Porcaro (Italy), Carmen Yaiza Rancel Gil (Spain), Vincent Zurita Turk (Belgium), Pablo Novillo Villegas (Ecuador).

28 Sep 2014 - 16:00