News and Events

Joint PhD fellowship available
1 Sep 2014 - 15 Sep 2014

Joint PhD fellowship on “Understanding the effect of evoked emotions in long-term memory” at the Department of Information and Communication Technologies (DTIC-UPF)

DESCRIPTION

Human visual perception emerges from complex information processing taking place in the brain. Nonetheless, since our perceptual experience arises with apparent ease, we are often unaware of such complexity. Vision is an active process in that detailed representations of our visual world are built only by actively scanning it with our eyes through a series of saccades and fixations. The process of actively scanning a visual scene while looking for something in a cluttered environment is known as visual search. The study of visual search processes by means of eye-tracking and EEG recordings not only offers a unique opportunity to gain fundamental insights into visual information processing in the human brain, but also opens new avenues to assess cognitive function and its relation to normal aging and age-related cognitive pathologies.

The successful applicant will study novel “cognitive signatures” derived from eye-tracking methods and EEG recordings, and will investigate the role of evoked emotions in such signatures. The simultaneous acquisition of eye-tracking and EEG recordings will allow the PhD candidate to investigate the effect of evoked emotions on long-term memory by linking brain activity with behavioral results. Throughout the project, music-evoked emotions will be considered. The opening is for a joint position between the Computational Neuroscience Group and the Music Technology Group at DTIC-UPF.

The PhD project will be closely related to and supported by the funded research project TIN2013-40630-R, ComputVis@Cogn: Visual Search as a Hallmark of Cognitive Function: An Interdisciplinary Computational Approach.

Starting date: Fall 2014
Duration: 3 years

REQUIREMENTS

Students with a strong background in mathematics, computer science, or physical sciences are particularly encouraged to apply. The applicants must hold an MSc degree in Computer Science, Physics, Applied Math, Cognitive Science, Psychology or related discipline. Proficiency in both written and spoken English is required.

HOW TO APPLY?

Interested people should send a resume as well as an introduction letter to laura [dot] dempere [at] upf [dot] edu and rafael [dot] ramirez [at] upf [dot] edu

Deadline for applications is September 15th 2014. Interested candidates are welcome to contact laura [dot] dempere [at] upf [dot] edu and rafael [dot] ramirez [at] upf [dot] edu (subject: Joint PhD fellowship) for further details.

 

1 Sep 2014 - 09:28
Research/development position at MTG-UPF
This position will involve working with a team at MTG-UPF in Barcelona to develop audio signal processing applications related to the analysis and characterization of instrumental sounds.
 
Starting date: mid-September 2014
Duration: 12 months with option to renew
 
Required skills/qualifications:

MSc degree in Computer Science, Electrical Engineering or similar educational qualification
Experience in audio signal processing, machine learning and scientific programming (Python/C++)
Proficiency in both written and spoken English
Preferred skills/experience:

Experience using Essentia and Freesound.org
Music education and experience in playing a musical instrument
Familiarity with web technologies

 
ABOUT MTG-UPF:

The Music Technology Group of the Universitat Pompeu Fabra is a leading research group with more than 40 researchers, carrying out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling. The MTG aims to contribute to the improvement of the information and communication technologies related to sound and music, carrying out competitive research at the international level while transferring its results to society. To that end, the MTG aims to find a balance between basic and applied research, promoting interdisciplinary approaches that incorporate knowledge from both scientific/technological and humanistic/artistic disciplines. For more information on MTG-UPF please visit http://mtg.upf.edu



HOW TO APPLY?

Interested people should send a resume as well as an introduction letter to mtg [at] upf [dot] edu (subject: Research/development position)
28 Jul 2014 - 14:51
PhD fellowship on “Audio-Visual Approaches for Music Content Description”

The Music Technology Group and the Image Processing Group of the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona are opening a joint PhD fellowship in the topic of “Audio-Visual Approaches for Music Content Description” to start in the Fall of 2014.

Topic:

Music is a highly multimodal concept, where various types of heterogeneous information are associated with a music piece (audio, musician’s gestures and facial expression, lyrics, etc.). This has recently led researchers to approach music through its various facets, giving rise to multimodal music analysis studies.

The goal of this fellowship is to research the complementarity of audio and image description technologies in order to improve the accuracy and meaningfulness of state-of-the-art music description methods. These methods are at the core of content-based music information retrieval. Several standard tasks could benefit from this work: structural analysis and segmentation, discovery of repeated themes and sections, music similarity computation and music retrieval, genre/style classification, artist identification, and emotion (mood) characterization.

This PhD will be linked to ongoing funded research projects at the MTG and GPI, such as PHENICX (Performances as Highly Enriched aNd Interactive Concert eXperiences), 'Inpainting Tools for Video Post-production. Variational theory and fast algorithms', SIGMUS (SIGnal analysis for the discovery of traditional MUSic repertoire) and MTM2012-30772.

Requirements:

Applicants should have experience in audio and image signal processing, and hold an MSc in a related field (e.g. telecommunications, electrical engineering, physics, mathematics, or computer science). Experience in scientific programming (Matlab/Python/C++) and excellent English are essential. A musical background and expertise in multimedia information retrieval will also be valued.

The grant involves teaching assistance, so an interest in teaching is also valued.

Application:

Interested candidates should send a CV and motivation letter to Prof. Emilia Gómez (emilia [dot] gomez [at] upf [dot] edu) and Prof. Gloria Haro (gloria [dot] haro [at] upf [dot] edu) and include in the subject [PhD Audio-Visual].
They will also have to apply to the PhD program of the DTIC at UPF.

Application deadline: September 1st 2014
Starting date: October 15th 2014

More information:

http://www.upf.edu/dtic_doctorate/
http://www.upf.edu/dtic_doctorate/phd_fellowships.html
http://mtg.upf.edu
http://gpi.upf.edu

28 Jul 2014 - 09:29
Participation in NIME 2014

The 14th International Conference on New Interfaces for Musical Expression (NIME) took place at Goldsmiths, University of London, between June 30th and July 3rd. The MTG took part, presenting three papers:

7 Jul 2014 - 09:06
Presentations of PhD proposals
30 Jun 2014

On June 30th 2014 we will hold the defences of the thesis proposals of five first-year PhD students of the MTG. The presentations are open to everyone.

10:15h - Nadine Kroher (Supervisor: Emilia Gomez). Title of proposal: "Computational Transcription, Description and Analysis of the Flamenco Singing Voice". Room 55.410 (Tanger building)

11:00h - Marius Miron (Supervisor: Emilia Gomez). Title of proposal: "Source Separation and Signal Modeling of Orchestral Music Mixtures". Room 55.410 (Tanger building)

11:45h - Georgi Dzhambazov (Supervisor: Xavier Serra). Title of proposal: "Analysis of timbral and phonetic characteristics of singing voice in the Turkish art music tradition". Room 55.410 (Tanger building)

12:30h - Sergio Oramas (Supervisor: Xavier Serra). Title of proposal: "Harvesting, Structuring and Exploiting Social Data in Music Information Retrieval". Room 55.410 (Tanger building)

16:00h - Rafael Caro Repetto (Supervisor: Xavier Serra). Title of proposal: "Understanding Xipi and Erhuang. Analysis of the musical dimension of Jingju Arias". Room 20.287 (Ciutadella campus)

20 Jun 2014 - 16:22
Participation in FMA 2014
Nadine Kroher, Georgi Dzhambazov, Sertan Şentürk and Xavier Serra participate in the 4th International Workshop on Folk Music Analysis, which takes place in Istanbul, Turkey, on June 12th and 13th, 2014.
 
They are presenting the following work done at the MTG:  
10 Jun 2014 - 17:37
SMC master thesis defense
1 Jul 2014 - 4 Jul 2014

What? SMC master students 2013/2014 defend their final projects on July 1st-4th

Where? Room 55.309 Tanger building

Detailed schedule:

Student name   Title   Supervisor   Date   Hour 
Aram Estiu Graugés   Animal vocalization analysis/synthesis   Jordi Janer and Jordi Bonada   July 1st   9:30  
S.I. Mimilakis   Voice quality modelling with the Wide-Band Harmonic Sinusoidal Modeling Algorithm   Jordi Bonada   July 1st   10:00  
Roger Rios Rubiras   A comparative study of speech dereverberation algorithms on music signals for interactive remixing applications   Stanislaw Gorlow and Jordi Janer   July 1st   10:30  
Oriol Romaní Picas   Score alignment in recordings from large ensembles   Julio J. Carabías-Orti and Jordi Janer   July 1st   11:15  
Charalambos Christopoulos   Augmented music performance by gestural recognition in 3D space using Polhemus sensors   Alfonso Perez   July 1st   11:45  
Giacomo Herrero Coli   Supervised music structure segmentation/annotation   Joan Serrà   July 2nd   9:30  
Toros Ufuk Senan   Preservation and study of ancient wood musical instruments stored in museums and conservatories.   Enric Guaus and Paul Poletti   July 2nd   10:00  
Nuno Hespanhol   Automatic Classification of Musical Sounds   Xavier Serra and Frederic Font   July 2nd   10:30  
Constantinos A. Dimitriou   Similarity Measures for Audio Classes   Xavier Serra and Frederic Font   July 2nd   11:15  
Vignesh Ishwar   Prominent pitch analysis for the study of vocal melodies in music   Xavier Serra   July 2nd   11:45  
Andrés Pérez López   Real time tools for 3d audio spatialization   Daniel Arteaga   July 3rd   9:30  
Nicholas Harley   Evaluation of Pitch-Class Set Similarity Measures for Tonal Analysis   Agustín Martorell   July 3rd   10:00  
Belén Nieto Núñez   Melody Extraction: addressing user satisfaction and context-awareness   Emilia Gómez   July 3rd   10:30  
Jorge A. Cuarón   Perceptual Validation of Chord Estimation Evaluation Standards   Agustín Martorell   July 3rd   11:15  
Jaime Parra Damborenea   ReactBlocks: A 3D Tangible Interface for Music Learning   Sergi Jordà and Cárthach Ó Nuanáin   July 4th   9:30  
Hazar Emre Tez   Symbolic Modular 2D GUIs with Physical Properties   Sergi Jordà and Cárthach Ó Nuanáin   July 4th   10:00  
Daniel Gómez Marín   Smart Percussive spaces   Sergi Jordà   July 3rd   10:30  
Marcel Schmidt   A Musical Interface for People with Motor Disabilities   Zacharias Vamvakousis   July 4th   11:15  
Francisco Rodríguez Algarra   Audio-based computational stylometry for electronic music   Perfecto Herrera   July 4th   11:45  
Erim Yurci   Emotion detection from EEG signals: correlating cerebral cortex activity with music evoked emotion   Rafael Ramirez   July 4th   12:15  
Urbez Caplabo   Harmony high level features   Perfecto Herrera and Sergi Jordà   July 3rd   13:00  

 

10 Jun 2014 - 17:03
Xavier Serra receives the ICREA Academia prize for the second time

Xavier Serra has received the ICREA Academia prize, given by the Catalan Institution for Research and Advanced Studies, for the second time.

The ICREA Academia program recognizes research excellence and leadership, with the goal of motivating and retaining faculty members at Catalan public universities. ICREA is an institution of the Catalan government whose fundamental objective is to hire researchers from around the world through a selection process based on scientific talent.

4 Jun 2014 - 13:22
Concert and lecture demonstration by the Gundecha Brothers
19 Jun 2014

On June 19th 2014, in the context of CompMusic, we are organizing a lecture demonstration and a concert by the Gundecha Brothers, who are among the most prominent singers of the Dhrupad music tradition of India.

Venue: Auditorium of the Conservatori Municipal de Música de Barcelona, Bruc 112, Barcelona

17.00h–18.45h. Lecture demonstration by the Gundecha Brothers on “Making of Voice & Raga in Indian Music” 

19.30h–20.00h. Presentation of the CompMusic project by Xavier Serra

20.00h–21.45h. Dhrupad concert by the Gundecha Brothers

30 May 2014 - 13:31
Seminar on repovizz at McGill

A seminar by Esteban Maestre, Marie Curie Fellow, McGill University and Universitat Pompeu Fabra;
with the assistance of Quim Limona, Universitat Pompeu Fabra

DATE: Tuesday June 10, 2014 at 12:30pm
LOCATION: Room A832, New Music Building, 527 Sherbrooke Street West (CIRMMT)

ABSTRACT

repovizz is an integrated online system capable of structural formatting and remote storage, browsing, exchange, annotation, and visualization of synchronous multi-modal, time-aligned data. Motivated by a growing need for data-driven collaborative research, repovizz aims to resolve commonly encountered difficulties in sharing or browsing large collections of multi-modal datasets. In its current state, repovizz is designed to hold time-aligned streams of heterogeneous data: audio, video, motion capture, physiological signals, extracted descriptors, annotations, et cetera. Most popular formats for audio and video are supported, while CSV formats are adopted for streams other than audio or video (e.g. motion capture or physiological signals). The data itself is structured via customized XML files, allowing the user to (re-)organize multi-modal data in any hierarchical manner. Datasets are stored in an online database, allowing the user to interact with the data remotely through a powerful HTML5 visual interface accessible from any current web browser; this feature can be considered a key aspect of repovizz, since data can be explored, annotated, or visualized from any location.

repovizz has been developed by the Music Technology Group of Universitat Pompeu Fabra in the context of large-scale research projects over the past few years, and it is now close to a beta launch. In this seminar we'll give an overview of the main capabilities of repovizz and its current state of development, followed by a short tutorial.
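To give a flavor of the hierarchical, time-aligned organization described above, the sketch below assembles a small XML descriptor for a hypothetical multimodal recording session. Note that the element and attribute names here are invented for illustration only; they are not the actual repovizz XML schema.

```python
# Hypothetical sketch of a hierarchical descriptor for a multimodal,
# time-aligned dataset, in the spirit of the repovizz data model.
# Element/attribute names are illustrative, NOT the real repovizz schema.
import xml.etree.ElementTree as ET

def build_dataset_descriptor():
    root = ET.Element("dataset", name="violin_performance_01")

    # Audio/video streams keep their native container formats.
    av = ET.SubElement(root, "group", name="audio_video")
    ET.SubElement(av, "stream", kind="audio", file="take01.wav")
    ET.SubElement(av, "stream", kind="video", file="take01.mp4")

    # Non-AV streams (motion capture, physiological signals, extracted
    # descriptors) would be stored as CSV, one sample per row.
    sig = ET.SubElement(root, "group", name="signals")
    ET.SubElement(sig, "stream", kind="mocap", file="bow_position.csv", rate="240")
    ET.SubElement(sig, "stream", kind="descriptor", file="pitch.csv", rate="100")

    # Free-form annotations attached to the same timeline (seconds).
    ann = ET.SubElement(root, "annotations")
    ET.SubElement(ann, "label", start="0.0", end="2.5").text = "tuning"
    return root

root = build_dataset_descriptor()
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Because the tree is plain XML, any hierarchy can be expressed by nesting further `group` elements, which mirrors the "(re-)organize multi-modal data in any hierarchical manner" idea in the abstract.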

27 May 2014 - 09:39