News and Events

New students in the SMC master program

In the new academic year 2014-2015, eighteen new students have joined the Master in Sound and Music Computing:

Iñigo Angulo Otegui (Spain), Yile Yang (China), Pritish Chandna (India), Shane McGrath (Ireland), Jordi Pons (Spain), Francesc Capó Clar (Spain), Miquel Espósito Pérez (Spain), Swapnil Gupta (India), Adriana Suarez Iguaran (Colombia), Ignasi Adell Arteaga (Spain), Xavier Eduard Lizarraga Seijas (Spain), Serkan Ozer (Turkey), Sanjeel Parekh (India), Jaume Parera Bonmati (Spain), Lorenzo Porcaro (Italy), Carmen Yaiza Rancel Gil (Spain), Vincent Zurita Turk (Belgium), Pablo Novillo Villegas (Ecuador).

28 Sep 2014 - 17:00
Participation in EuroMAC 2014
Agustín Martorell participates in the 8th European Music Analysis Conference (EuroMAC), which takes place on September 17th-20th, 2014, in Leuven (Belgium). EuroMAC is devoted to music analysis in the academic (musicological) tradition. The MTG's contribution is part of a two-day special session on Computational Music Analysis.

The abstract, presented both as a talk and as a poster, is:

A. Martorell. "Systematic Set-Class Surface Analysis: a Hierarchical Multi-Scale Approach".

An extended book chapter on this topic, co-authored by A. Martorell and E. Gómez, will appear in a forthcoming book on Computational Music Analysis, to be published by Springer in 2015.

10 Sep 2014 - 11:23
Participation in ICMC-SMC 2014

Nadine Kroher, Ajay Srinivasamurthy, Sankalp Gulati, Rafael Ramírez, Esteban Maestre, Martí Umbert, Zacharias Vamvakousis, and Xavier Serra participate in the joint ICMC-SMC 2014 conference that takes place in Athens, Greece, from 14 to 20 September 2014. This joint conference brings together the 40th International Computer Music Conference (ICMC) and the 11th Sound & Music Computing conference (SMC). Apart from taking part in various workshops and round tables, the papers being presented are:

10 Sep 2014 - 10:10
Participation in DLfM 2014
Mohamed Sordo and Alastair Porter participate in the 1st International Digital Libraries for Musicology workshop (DLfM 2014), which takes place on September 12th, 2014, in London (UK), in conjunction with the ACM/IEEE Digital Libraries conference 2014. The papers presented in which the MTG participates, all in the context of CompMusic, are:
8 Sep 2014 - 10:03
MOOC on Audio Signal Processing for Music Applications

In collaboration with Prof. Julius Smith from Stanford University, Xavier Serra has put together a 10-week course on Audio Signal Processing for Music Applications on the Coursera online platform. The course will start on October 1st and the landing page is https://www.coursera.org/course/audio.

The course focuses on the spectral processing techniques relevant to the description and transformation of sounds, developing the basic theoretical and practical knowledge needed to analyze, synthesize, transform, and describe audio signals in the context of music applications.

The course is based on open software and content. The demonstrations and programming exercises are done using Python under Ubuntu, and the references and materials for the course come from open online repositories. The software and materials developed for the course are also distributed with open licenses.
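
To give a flavor of the material, here is a minimal sketch in the spirit of the course's Python demonstrations, though not taken from its materials; the sample rate, frame size, and test tone are arbitrary choices for illustration:

    import numpy as np

    fs = 44100                               # sample rate in Hz (arbitrary choice)
    N = 1024                                 # analysis frame size in samples
    t = np.arange(N) / fs
    x = 0.8 * np.sin(2 * np.pi * 440.0 * t)  # one frame of a 440 Hz test tone

    w = np.hanning(N)                        # analysis window, reduces spectral leakage
    X = np.fft.rfft(x * w)                   # DFT of the windowed frame
    mag_db = 20 * np.log10(np.abs(X) + np.finfo(float).eps)  # magnitude in dB

    peak_bin = int(np.argmax(mag_db))
    # reports ~431 Hz: within one bin (fs/N, about 43 Hz) of the true 440 Hz
    print("spectral peak near %.1f Hz" % (peak_bin * fs / N))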

5 Sep 2014 - 18:53
Performance on flamenco, mathematics and technology
Nadine Kroher, a researcher in the COFLA and SIGMUS projects within the Sound and Music Description area of the MTG, is performing on September 26th 2014, together with other researchers from the COFLA project, in an event devoted to flamenco music and technology held in Seville.

A "cantaor" singing, analyzed in real time to visualize acoustic aspects related to his particular style and to automatically detect the flamenco style and variant.

Program (original in Spanish): Flamenco, Matemáticas y Tecnología musical. Several flamenco singing styles ("cantes") are performed and analyzed from a mathematical-computational point of view. The software under development, capable of recognizing the performed cantes, is demonstrated live. The event closes with a flamenco performance as a festive finale.

Scientific coordinator: José Miguel Díaz Báñez.
Research group: COFLA (COmputational analysis of FLAmenco music), Universidad de Sevilla.
Venue: Auditorium (Sala Chicarreros) of Fundación CajaSol, Plaza de San Francisco.
Free admission, subject to capacity.

5 Sep 2014 - 14:01
Journal article published in Frontiers in Cognitive Science

Our open access journal article on string quartet interdependence for the Performance Science topic of Frontiers in Cognitive Science is available online! The article proposes and evaluates a computational methodology for quantifying the amount of interdependence among the members of a string quartet, in terms of four distinct dimensions of the performance (Intonation, Dynamics, Timbre and Tempo).

Papiotis P., Marchini M., Perez-Carrillo A. and Maestre E. (2014) Measuring ensemble interdependence in a string quartet through analysis of multidimensional performance data. Front. Psychol. 5:963. doi: 10.3389/fpsyg.2014.00963

Abstract: In a musical ensemble such as a string quartet, the musicians interact and influence each other's actions in several aspects of the performance simultaneously in order to achieve a common aesthetic goal. In this article, we present and evaluate a computational approach for measuring the degree to which these interactions exist in a given performance. We recorded a number of string quartet exercises under two experimental conditions (solo and ensemble), acquiring both audio and bowing motion data. Numerical features in the form of time series were extracted from the data as performance descriptors representative of four distinct dimensions of the performance: Intonation, Dynamics, Timbre, and Tempo. Four different interdependence estimation methods (two linear and two nonlinear) were applied to the extracted features in order to assess the overall level of interdependence between the four musicians. The obtained results suggest that it is possible to correctly discriminate between the two experimental conditions by quantifying interdependence between the musicians in each of the studied performance dimensions; the nonlinear methods appear to perform best for most of the numerical features tested. Moreover, by using the solo recordings as a reference to which the ensemble recordings are contrasted, it is feasible to compare the amount of interdependence that is established between the musicians in a given performance dimension across all exercises, and relate the results to the underlying goal of the exercise. We discuss our findings in the context of ensemble performance research, the current limitations of our approach, and the ways in which it can be expanded and consolidated.
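
As a toy illustration of the simplest (linear) end of this methodology, and not the paper's actual pipeline, the sketch below estimates interdependence between two synthetic performance descriptors with a Pearson correlation; the shared component stands in for a hypothetical common expressive gesture:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                                      # length of the descriptor time series
    common = rng.normal(size=n)                  # hypothetical shared expressive gesture
    player1 = common + 0.5 * rng.normal(size=n)  # each player follows it, plus individual noise
    player2 = common + 0.5 * rng.normal(size=n)

    # Pearson correlation as a basic linear interdependence estimate:
    # near 0 for independent playing, near 1 for strongly coupled playing.
    r = np.corrcoef(player1, player2)[0, 1]
    print("estimated interdependence: %.2f" % r)  # ~0.8 with these noise levels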

2 Sep 2014 - 18:10
Seminar by Mark Sandler on Semantic Audio
8 Sep 2014

Mark Sandler, from the Centre for Digital Music of Queen Mary University of London, gives a seminar on Monday September 8th 2014 at 15:00h in room 55.309 on "Semantic Audio: combining semantic web technology with audio analysis".

Abstract: The seminar will present some of the latest research from the Centre for Digital Music in Semantic Audio, where appropriate by means of demos. These will include the use of semantic linked data to create music browsing applications, the use of content analysis in recording studios to improve the quality of audio features and music informatics applications, and music recommendation based on mood. It will end with a few ideas on Computational Audio - where computer science meets audio processing.

Bio: Professor Mark Sandler has been applying Digital Signal Processing to problems in audio and music since the late 1970s, and is one of the pioneers of the area known as Music Informatics. He currently specialises in the use of Semantic Technologies for Audio and Music. He has published over 400 papers and graduated over 30 PhD students. He was the Principal Investigator of the pioneering UK-funded OMRAS2 project (omras2.org) and the local PI on SIMAC, which was led from UPF. He recently completed a collaborative grant with the BBC and I Like Music in the area of music and emotion, named Making Musical Mood Metadata (http://www.bbc.co.uk/rd/projects/making-musical-mood-metadata), which explored the use of mood in music recommendation systems, and has just started a 5-year grant, Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption. He is currently Chief Scientist of the Centre for Digital Music.

2 Sep 2014 - 13:53
Joint PhD fellowship available
1 Sep 2014 - 15 Sep 2014

Joint PhD fellowship on “Understanding the effect of evoked emotions in long-term memory” at the Department of Information and Communication Technologies (DTIC-UPF)

DESCRIPTION

Human visual perception emerges from complex information processing taking place in the brain. Nonetheless, since our perceptual experience arises with apparent ease, we are often unaware of such complexity. Vision is an active process: detailed representations of our visual world are built only by actively scanning the scene with our eyes in a series of saccades and fixations. The process of actively scanning a visual scene while looking for something in a cluttered environment is known as visual search. The study of visual search processes by means of eye-tracking and EEG recordings not only offers a unique opportunity to gain fundamental insights into visual information processing in the human brain, but also opens new avenues to assess cognitive function and its relation to normal aging and age-related cognitive pathologies.

The successful applicant will study novel "cognitive signatures" derived from eye-tracking methods and EEG recordings, and will investigate the role of evoked emotions in such signatures. The simultaneous acquisition of both eye-tracking and EEG recordings will allow the PhD candidate to investigate the effect of evoked emotions in long-term memory by linking brain activity with behavioral results. Throughout the project, music-evoked emotions will be considered. The opening is for a joint position between the Computational Neuroscience Group and the Music Technology Group at DTIC-UPF.

The PhD project will be closely related to, and supported by, the funded research project TIN2013-40630-R (ComputVis@Cogn: Visual Search as a Hallmark of Cognitive Function: An Interdisciplinary Computational Approach).

Starting date: Fall 2014
Duration: 3 years

REQUIREMENTS

Students with a strong background in mathematics, computer science, or physical sciences are particularly encouraged to apply. The applicants must hold an MSc degree in Computer Science, Physics, Applied Math, Cognitive Science, Psychology or related discipline. Proficiency in both written and spoken English is required.

HOW TO APPLY?

Interested people should send a resume as well as an introduction letter to laura [dot] dempere [at] upf [dot] edu and rafael [dot] ramirez [at] upf [dot] edu

Deadline for applications is September 15th 2014. Interested candidates are welcome to contact laura [dot] dempere [at] upf [dot] edu and rafael [dot] ramirez [at] upf [dot] edu (subject: Joint PhD fellowship) for further details.

1 Sep 2014 - 09:28
Research/development position at MTG-UPF
This position involves working with a team at MTG-UPF in Barcelona to develop audio signal processing applications related to the analysis and characterization of instrumental sounds.

Starting date: mid-September 2014
Duration: 12 months, with an option to renew

Required skills/qualifications:

MSc degree in Computer Science, Electrical Engineering or similar educational qualification
Experience in audio signal processing, machine learning and scientific programming (Python/C++)
Proficiency in both written and spoken English

Preferred skills/experience:

Experience using Essentia and Freesound.org
Music education and experience playing a musical instrument
Familiarity with web technologies

ABOUT MTG-UPF:

The Music Technology Group (MTG) of the Universitat Pompeu Fabra is a leading research group with more than 40 researchers, carrying out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling. The MTG aims to contribute to improving the information and communication technologies related to sound and music, carrying out internationally competitive research while transferring its results to society. To that end, the MTG seeks a balance between basic and applied research, promoting interdisciplinary approaches that incorporate knowledge from both scientific/technological and humanistic/artistic disciplines. For more information on MTG-UPF please visit http://mtg.upf.edu

HOW TO APPLY?

Interested people should send a resume as well as an introduction letter to mtg [at] upf [dot] edu (subject: Research/development position).
28 Jul 2014 - 14:51