News and Events

Seminar by Cédric Mesnage on Social Shuffle

Cédric Mesnage, from the University of Lugano, will give a research seminar on "Social Shuffle" on Friday February 10th at 13:00h in room 55.410.

Abstract: In this talk I present my understanding of Music Discovery. First, I describe the work carried out during my PhD studies in web engineering and social media, in particular web-based experiments on the concepts of tag navigation and social diffusion for music discovery using data from, Facebook and YouTube. Second, I list the problems I see with the discovery of world music from an inter-cultural perspective. Third, I give potential directions and outcomes for future work.

6 Feb 2012 - 11:14 | view
Open positions in CompMusic

The Music Technology Group of the Universitat Pompeu Fabra, Barcelona, has open research positions at the PhD and post-doc levels to work on the CompMusic project. Interested applicants should send a CV and a letter of motivation, expressing their research interests in relation to the CompMusic project, to Xavier Serra.

We are especially interested in candidates with an engineering background in areas such as semantic analysis, machine learning or signal processing, and with knowledge of one of the art-music traditions that are the focus of the project, particularly the Arab (Andalusi) or the Chinese (Han) ones.

25 Jan 2012 - 14:17 | view
Lectures by Markus Schedl on context-based MIR

Markus Schedl, from the Department of Computational Perception of the Johannes Kepler University in Linz (Austria), will give a series of lectures on context-based Music Information Retrieval.

* 23.02: 15:30-16:30 (room 52.321) DTIC Research seminar on "Geo-Aware Music Information Extraction from Social Media"

Abstract: The abundance of data present in Social Media opens an unprecedented source of information about every topic of our daily lives. Since music plays a vital role in many people's lives, information about music items is found in large amounts in data sources such as social networks and microblogs. In this talk, I will report on the latest findings in Social Media Mining to extract meaningful musical information from microblogs. Specifically, I will address the topics of similarity measurement, popularity estimation, and culture-aware music taste and trend detection. In addition to elaborating on the methodological background, I will present some application scenarios and demonstrator systems that strive to illustrate some application domains of this interesting research field.

* 28.02-01.03 (room 52.S27) Lectures on: "Context-based Music Information Retrieval"

These lectures give an introduction to Music Information Retrieval (MIR), with a focus on context-based methods. MIR is concerned with the extraction, processing, and use of music-related information from a wide variety of musical data sources (scores, digital audio, live concerts, collaborative tags, video clips, album covers, etc.). I will focus on feature extraction (context- and Web-based), similarity measurement, and applications of MIR. I will also strive to include my latest research on Social Media Mining for MIR.

  • 28.02.: 12:00-14:00 Introduction to MIR

What is MIR? - definitions, key aspects, subfields and typical tasks, the basic scheme of an MIR system, basics of different retrieval approaches, feature extraction (audio and contextual), and similarity measurement

  • 29.02.: 12:00-14:00 Context-based Feature Extraction

motivation, data sources for contextual features, specific biases and problems of contextual features, term-vector-based (web terms, tags, lyrics) and co-occurrence-based (playlists, page counts, P2P networks) approaches

  • 01.03.: 10:00-12:00 Similarity Measurement and Applications

similarity measurement on different kinds of music-related data (from scalar to multi-instance, multi-dimensional data), selected applications developed by the Department of Computational Perception / Johannes Kepler University, Linz, Austria (for instance, user interfaces to music)
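The term-vector approaches covered in these lectures typically represent each music item as a sparse vector of tag or web-term counts and compare items with cosine similarity. A minimal sketch of that idea in Python, where the track names and tag counts are invented purely for illustration:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(tags_a, tags_b):
    """Cosine similarity between two sparse tag-count vectors."""
    common = set(tags_a) & set(tags_b)
    dot = sum(tags_a[t] * tags_b[t] for t in common)
    norm_a = sqrt(sum(c * c for c in tags_a.values()))
    norm_b = sqrt(sum(c * c for c in tags_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical collaborative tag counts for three tracks
song_a = Counter({"rock": 40, "guitar": 25, "90s": 10})
song_b = Counter({"rock": 35, "guitar": 20, "grunge": 15})
song_c = Counter({"jazz": 30, "piano": 22, "smooth": 12})

# Tracks sharing tags score higher than tracks with disjoint tags
assert cosine_similarity(song_a, song_b) > cosine_similarity(song_a, song_c)
```

Real systems weight the counts (e.g. with TF-IDF) and must cope with the biases of contextual data, such as popularity skew and noisy tags, which the lectures address.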

19 Jan 2012 - 17:39 | view
CompMusic Workshop at KIIT-Gurgaon

The CompMusic project is organizing this workshop as a satellite event of the International Symposium on Frontiers of Research on Speech, Music and Allied Signal Processing (FRSM 2012), with the aim of giving an overview of the Music Information Research relevant to Hindustani and Carnatic music.

Date: January 20th, 2012, from 9:30am to 5:30pm
Venue: College of Engineering, Kamrah International Institute of Technology (KIIT), KIIT Campus, Sohna Road, Near Bhondsi, Gurgaon, Haryana, India

  • 09:30am: CompMusic: Current research and initial results (Xavier Serra)
  • 10:00am: Hindustani Music: A case for computational modeling (Preeti Rao)
  • 10:30am: Carnatic Music: A signal processing perspective (Hema Murthy)
  • 11:00am: tea
  • 11:30am: Carnatic Music: A musicological perspective (T. M. Krishna)
  • 12:30pm: Hindustani Music: A musician's perspective (Pt. Buddhadev Dasgupta)
  • 01:30pm: lunch
  • 02:30pm: Distribution-based computational analysis of makam music (Barış Bozkurt)
  • 03:30pm: Machine learning for music discovery (Joan Serrà)
  • 04:00pm: tea
  • 04:30pm: Panel discussion (moderator: Xavier Serra; panelists: Preeti Rao, Hema Murthy, Barış Bozkurt, T. M. Krishna, Pt. Buddhadev Dasgupta, Mallika Banerjee)
6 Jan 2012 - 05:58 | view
MTG creates Voctro Labs, a new spin-off company

Voctro Labs, the third spin-off company of the MTG, starts operating today. This business initiative is led by Jordi Bonada, Jordi Janer, Merlijn Blaauw and Oscar Mayor, and has its headquarters in the new UPF space for incubation and entrepreneurship, the Almogàvers Business Factory.

Voctro Labs is founded with the goal of becoming the world leader in the market of voice-processing technologies, offering advanced audio-processing technologies to the entertainment industry (film, music and video games).

As a result of an agreement with Yamaha Corp. (Japan), Voctro Labs has developed the world's first Spanish virtual singers for the Vocaloid 3 software: a male voice (Bruno) and a female voice (Clara). These voices are marketed through an online distribution platform and will be available for this Christmas campaign. With the new Spanish voices, Voctro Labs aims to spread the Vocaloid phenomenon throughout Spanish-speaking territories worldwide.

Find out more about it in this Press Release

22 Dec 2011 - 10:44 | view
Seminar by Yi-Hsuan Yang on perceived emotion of music

Title: Dimensional Music Emotion Recognition
Date: Tuesday, Dec 13, 2011, 12:30pm
Location: room 52.S31, Roc Boronat building

Abstract: Automatic recognition of the perceived emotion of music allows users to retrieve and organize their music collections in a fashion that is content-centric and intuitive. A typical approach to music emotion recognition (MER) categorizes emotions into a number of classes and applies machine learning techniques to train a classifier. This approach, however, faces a granularity issue: the number of emotion classes is too small in comparison with the richness of emotion perceived by humans. In this talk, I will introduce research that takes a very different perspective and views emotions as points in a 2-D space spanned by two latent dimensions: valence (how positive or negative) and arousal (how exciting or calming). In this approach, MER becomes the prediction of the valence and arousal values of a song, corresponding to a point in the emotion plane. The granularity and ambiguity issues associated with emotion classes thus no longer exist, since no categorical classes are needed. Moreover, because the 2-D plane provides a simple basis for a user interface, new emotion-based music organization, browsing, and retrieval methods can easily be created for mobile devices with small display areas.

Biography: Yi-Hsuan Yang received the Ph.D. degree in Communication Engineering from National Taiwan University, Taiwan, in 2010. Since September 2011, he has been with the Academia Sinica Research Center for Information Technology Innovation, where he is an Assistant Research Fellow. His research interests include music information retrieval, multimedia signal processing, machine learning, and affective computing. He was awarded the Microsoft Research Asia Fellowship in 2008 and the MediaTek Fellowship in 2009. He is the author of the book Music Emotion Recognition, published by CRC Press in 2011.

12 Dec 2011 - 18:47 | view
Phonos Concert: Quartet Sax-Sons
On Friday December 16th 2011 at 19:30h in the Espai Polivalent of the Communication Campus of the UPF, Phonos is organizing a concert of saxophones and electroacoustic music.
9 Dec 2011 - 12:30 | view
Participation to DAFx'11

Panos Papiotis, Sašo Musevic and Justin Salamon participated in the 14th International Conference on Digital Audio Effects, which took place in Paris, France, on September 19-23, 2011. The MTG's participation included three poster presentations.


18 Nov 2011 - 15:37 | view
Phonos: Octophonic concert
On Tuesday November 22nd 2011 at 19:30h in the Espai Polivalent of the Communication Campus of the UPF, Phonos is organizing an Octophonic concert of electroacoustic music in the frame of the AMEE Barcelona meeting point 2011.
17 Nov 2011 - 18:47 | view
Presentation of Teclepatía at Interface Culture Lab (Linz)

Speakers: Sebastian Mealla and Dr. Aleksander Väljamäe (BCI Lab, TU Graz)
Title: Multimodal Display of Brain and Body Signals in Collaborative Experiences Using a Tabletop Interface and Physiology-driven Tangible Objects
Date: Tuesday, Nov 22nd, 2011
Location: Interface Culture Lab (Universität für Künstlerische und Industrielle Gestaltung in Linz)

Abstract: Physiological Computing has been applied in different disciplines such as Human-Computer Interaction, neuroscience and medical rehabilitation, and is becoming widespread due to device miniaturization and improvements in real-time processing. However, most of the current physiology-based technology focuses on single-user paradigms and traditional Graphical User Interfaces, so its application in collaborative scenarios is still emerging. Our work explores how sonification and visualization of human brain and body signals, and their presentation through tangible objects (physiopucks), can enhance user experience in collaborative, multiuser tasks. We present a multimodal interactive system built using a musical tabletop interface (Reactable) and an electro-physiology sensing system measuring Electroencephalogram (EEG) and heart rate (Enobio headset, Starlab) that allows performers to generate and control sounds using their own or their fellow team member's physiology, and to visualize all ongoing processes on an interactive surface.

Bio: Aleksander Väljamäe received his PhD in applied acoustics at Chalmers University of Technology, Gothenburg, Sweden, in 2007. During his PhD studies on multisensory perception he was a visiting researcher at the University of Barcelona (Dr. Soto-Faraco) and at NTT Communication Science Labs, Japan (Dr. Kitagawa). He has been active in a number of EU-funded projects: POEMS, PRESENCCIA, BrainAble, Future BNCI. From 2007 to 2010 he was a postdoc and psychophysiology lab director at the Laboratory for Synthetic Perceptive, Emotive and Cognitive Systems (SPECS), Universitat Pompeu Fabra, Barcelona, Spain, obtaining several grants as PI from national Spanish funding (TEC2009-13780, TEC2010-11599-E). Currently he is a senior postdoctoral researcher at the BCI Lab, Technical University of Graz, Austria. His psychophysiology research concerns how audiovisual media influence humans at the perceptual and cognitive levels, with particular stress on novel methods for the diagnosis and treatment of various brain disorders (e.g. autism, depression, chronic pain, migraine) and on new applications (BCI, neurocinema).

16 Nov 2011 - 17:51 | view