News and Events

Freesound: 8th anniversary

Today is the 8th anniversary of Freesound! Congratulations to the researchers, developers, moderators, donors and users who have been, and still are, part of this project for the success achieved so far. Thanks to the collaboration of this great community we now have more than 160,000 free sounds from all over the world.

Freesound was started in 2005 in our research group. One of its first aims was to create an open repository of sounds to be used for scientific and artistic research, but it quickly became a very popular site used by a wide variety of people to share sounds and the experiences around them. Users upload sounds they have recorded or created themselves and share them under Creative Commons licenses. Because the sounds in Freesound are of good quality, free and legal, professionals from different fields (cinema, music, videogames, software...) as well as amateur users rely on them for their work.

Freesound now has 3.5 million registered users around the world and about 40,000 unique visits a day; the total number of unique visits over these eight years has been more than 57 million, coming from almost every country in the world.

Sounds from Freesound have been put to many successful uses, for example in the movie Children of Men and in a song by the internationally known band The Prodigy. Freesound has received several awards, such as a BMW award, a Barcelona City award and, twice, a Google research award. But most importantly, throughout these years users have shared truly amazing sounds. Find the community's opinion about the coolest sounds on Freesound: http://www.freesound.org/forum/freesound-project/33568/

The interest in sharing sounds keeps growing, and so does Freesound's community of active users. For the MTG, Freesound offers an exceptional framework in which to carry out research on semantic web technologies. But we are especially proud of being able to offer a very useful service to society.

5 Apr 2013 - 09:26 | view
TONAS: a new dataset of flamenco a cappella sung melodies with corresponding manual transcriptions

As a little Friday gift, we're glad to announce the release of a new dataset of flamenco singing: TONAS.

The dataset includes 72 sung excerpts representative of three a cappella flamenco singing styles, i.e. Tonás (Debla and two variants of Martinete), together with manually corrected fundamental frequency and note transcriptions.

This collection was built by the COFLA team in the context of our research project on melodic transcription, similarity and style classification in flamenco music.


Further information about the music collection and about how, and by whom, the samples were transcribed is available on the dataset website, where you can of course download the audio, metadata and transcription files.
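
For those who would like to start working with the data right away, the minimal Python sketch below shows one way such per-excerpt annotations could be loaded. The file names and column layouts (time/F0 pairs for the pitch track; onset, duration and MIDI pitch for the notes) are only assumptions made for illustration, so please check the format specification on the dataset website rather than relying on this sketch.

import numpy as np

def load_f0_track(path):
    # Assumed layout: two whitespace-separated columns, time (s) and F0 (Hz).
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1]

def load_notes(path):
    # Assumed layout: three columns, onset (s), duration (s) and MIDI pitch.
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1], data[:, 2]

times, f0 = load_f0_track("excerpt_01.f0")                    # hypothetical file name
onsets, durations, pitches = load_notes("excerpt_01.notes")   # hypothetical file name
print(len(f0), "F0 frames,", len(onsets), "notes")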


We hope that you find this collection useful, whether for automatic transcription of the singing voice or for any other research topic (e.g. pitch estimation, onset detection, melodic similarity, singer identification, style classification), and we hope this dataset will increase the interest of our scientific community in the particular challenges of flamenco singing.


We would be very interested to receive your feedback.

Best regards,

The COFLA team

15 Mar 2013 - 12:11 | view
Barcelona Music Hack Day 2013 - Neuroscience and Music (Special Track)
13 Jun 2013 - 14 Jun 2013

Barcelona Music Hack Day 2013
13th - 14th June 2013 - Sónar Festival (Sónar+D)

Neuroscience and Music (Special Track)

Get the chance to develop a new application interfacing music with the brain at the Barcelona Music Hack Day 2013! We are looking for new connections between neuroscience and music.

The Music Hack Day (MHD) is a 24-hour hacking session in which participants conceptualize, create and present their projects. Any music technology goes, i.e. software, mobile applications, hardware, artworks or web development, as long as it is music related. The MHD has been a great way to demonstrate the creativity around music that comes from the tech community. Over the past three years more than 20 MHD events have taken place around the world: starting in London, the event has spread to Berlin, Amsterdam, Boston, Stockholm, San Francisco, Barcelona, New York, Sydney, Montreal... The MHD has gathered over 2000 participants, who have built hundreds of hacks, with over 125 music and tech companies supporting the events. The Music Technology Group (MTG) of Universitat Pompeu Fabra (UPF) has hosted the MHD in Barcelona since 2010, and it is currently organized within the frame of the Sónar festival.

With the support of the EC-funded project KiiCS (Knowledge Incubation in Innovation and Creation for Science), this year the Barcelona MHD will include a special neuroscience track that aims to provide a set of useful tools and APIs to encourage hacks that bring together music, brain signals, Brain-Computer Interfaces (BCIs), and other physiological sensors. Through this approach, we want to encourage new forms of music creation and interaction. Along the same lines, the MHD will offer a pre-event introductory workshop where the different hardware devices (BCI headsets such as Enobio, and other physiological sensors) that will be made available to MHD participants, together with the related APIs, will be presented to everyone interested in developing hacks within the neuroscience track.
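
To give a flavour of what a hack in this track could look like, here is a minimal Python sketch that maps a control value, standing in for a band-power or attention estimate coming from a BCI device, onto a synthesizer parameter sent over OSC. The brain signal is simulated with a random walk, and the OSC host, port and address are placeholders; the actual device APIs and tools will be presented at the pre-event workshop.

import time
import random
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# Placeholder OSC destination: any synth patch (Pd, SuperCollider, Max...)
# listening on this port could pick up the messages.
client = SimpleUDPClient("127.0.0.1", 9000)

level = 0.5  # simulated "brain activity" level in [0, 1]
while True:
    # Random walk standing in for a real band-power/attention estimate.
    level = min(1.0, max(0.0, level + random.uniform(-0.05, 0.05)))
    # Map it onto a hypothetical filter-cutoff parameter of the synth.
    client.send_message("/synth/cutoff", 200.0 + 4000.0 * level)
    time.sleep(0.1)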

This initiative is led by the Music Technology Group (MTG) in collaboration with the Science Communication Observatory at UPF through the KiiCS project, and it is supported by the research group Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS), also from UPF, and by Starlab Barcelona SL.

*** Important dates ***
Registration period: from April 15th to May 15th
N+MHD Workshop: Wednesday, June 12th 2013
Barcelona MHD: June 13th and 14th 2013

Come and build the future of music and neuroscience!

The Barcelona MHD is organized by MTG-UPF within the frame of Sónar+D. Original idea by Dave Haynes.

12 Mar 2013 - 19:07 | view
Seminar by Jordi Janer on music signal source separation

Jordi Janer, from the MTG, will give a talk on "Methods for Music Signal Source Separation of Professionally Produced Recordings" on Thursday, February 14th 2013 at 15:30h in room 52.321 of the Communication-Poblenou Campus of the UPF.

Abstract:
This presentation addresses the topic of music signal source separation. We show the outcome of a joint industrial research project at the Music Technology Group of UPF. Starting from the initial goal of removing the lead instrument from professionally produced music recordings, we worked towards a general framework for music signal modeling and separation. These methods introduce some novelties over the state of the art, extending approaches such as Non-negative Matrix Factorization (NMF). We present timbre classification for predominant pitch detection, vocal residual treatment, monophonic and polyphonic polytimbral source/filter models, and harmonic/percussive separation. Our methods can be grouped into two categories depending on the field of application: a) low-latency/low-computation and b) high-latency/high-computation. Several demos and potential uses of music source separation will be presented in the talk.
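
As background for the talk, the short sketch below illustrates two of the standard building blocks mentioned in the abstract, harmonic/percussive separation and NMF decomposition of a magnitude spectrogram, using the open-source librosa library. It is a generic illustration of these techniques on an assumed input file, not an implementation of the methods developed in the project.

import numpy as np
import librosa

# Load a mono excerpt (the file name is a placeholder).
y, sr = librosa.load("mix.wav", sr=None, mono=True)

# Harmonic/percussive separation based on median filtering of the spectrogram.
y_harmonic, y_percussive = librosa.effects.hpss(y)

# NMF: factor the magnitude spectrogram into spectral templates (W)
# and their time-varying activations (H).
D = librosa.stft(y)
S, phase = np.abs(D), np.exp(1j * np.angle(D))
W, H = librosa.decompose.decompose(S, n_components=8)

# Resynthesize the contribution of a single NMF component,
# reusing the phase of the original mixture.
y_component = librosa.istft(np.outer(W[:, 0], H[0]) * phase)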

Biography:
Jordi Janer is a researcher at the Music Technology Group of Universitat Pompeu Fabra in Barcelona. His research interests cover audio signal processing with a focus on the human voice, source separation, applications for real-time music interaction, and environmental sound analysis and soundscape modelling. After graduating in Electronic Engineering (2000), he started his career as a DSP engineer at Creamware GmbH (Germany, 2000-2003), designing and developing audio effects and virtual synthesizers. He later joined the UPF, where he obtained his PhD in 2008. As a visiting researcher, he stayed at McGill University (Canada, 2005) and at Northwestern University (USA, 2009). His activity as a researcher and project manager over the past years includes various publicly funded research projects (2004-2013) and joint research collaborations with Yamaha Corp. (Japan). In 2011 he also co-founded Voctro Labs, a spin-off company specialized in voice processing solutions for the audiovisual media industry.

11 Feb 2013 - 17:07 | view
New funded project to change the way we enjoy classical music concerts

PHENICX (Performances as Highly Enriched aNd Interactive Concert eXperiences) is a STREP project coordinated by the MTG in collaboration with TU Delft and funded by the European Commission.

The project will use state-of-the-art digital multimedia and internet technology to make traditional concert experiences richer and universally accessible: concerts will become multimodal, multi-perspective and multilayer digital artefacts that can be easily explored, customized, personalized, (re)enjoyed and shared among users. The main goal is twofold: (a) to make live concerts appealing to potential new audiences and (b) to maximize the quality of the concert experience for everyone.

PHENICX will last 36 months, starting on the 1st of February 2013, and the partner institutions involved are TU Delft, Johannes Kepler Universität Linz (JKU), Stichting Koninklijk Concertgebouworkest (RCO), VideoDock BV (VD), Österreichische Studiengesellschaft für Kybernetik (OFAI) and Escola Superior de Música de Catalunya (ESMUC).

The MTG team, coordinated by Emilia Gómez and Alba B. Rosado, will bring its expertise in audio processing (Jordi Janer & Jordi Bonada), music information retrieval (Agustín Martorell & Juanjo Bosch) and music interaction (Carles Fernández & Sergi Jordà) to work on different research challenges such as source separation, acoustic rendering, music visualization and gesture-based music interaction.

6 Feb 2013 - 21:48 | view
New funded project to work on traditional music repertoires

SIGMUS (SIGnal Analysis for the Discovery of Traditional MUSic Repertories) is a new MTG project funded by the Spanish Ministry of Economy and Competitiveness. SIGMUS will last 36 months, starting on the 1st of February 2013, and will focus on the study of the melodic and rhythmic characteristics of flamenco and Arab-Andalusian music repertoires by applying audio processing and semantic analysis methodologies.


5 Feb 2013 - 13:43 | view
Seminar by Bill Verplank on sketching metaphors

Bill Verplank, from CCRMA, will give a seminar on "Sketching Metaphors" on Thursday, February 7th, at 3:30pm in room 52.321.

Abstract:
In this seminar I will describe (sketch) some metaphors I have used to provide a framework for Interaction Design - examples will be drawn from the course that Max Mathews and I developed at CCRMA on designing music controllers (NIME).

About Bill Verplank:
Bill Verplank is a human factors engineer and designer educated in ME at Stanford and MIT. After four years teaching design at Stanford, he spent 22 years in industry: at Xerox (user interface), IDEO (product design) and Interval Research (haptics). He has been active as a visiting lecturer at Stanford (ME, CS, CCRMA), ID/IIT, TU/e, IDII and CIID, and professionally in ACM: SIGCHI, DIS, TEI, NIME.

4 Feb 2013 - 18:15 | view
Web Interface Designer job at the MTG-UPF

At the MTG-UPF, in the context of the CompMusic project, we are looking for a Web Interface Designer to be involved in the development of a system for browsing and interacting with audio collections. The system is an online web application that interfaces with musical data (audio, scores, editorial information) plus musical descriptions that are automatically obtained from the data.

The Web Designer will be responsible for the graphical and functional design elements of the system, creating and implementing attractive and effective website designs that provide the end user with an engaging experience.

Given that the work will involve many meetings and discussions with the researchers at the UPF, the candidate should live in the Barcelona area.

Required skills:

  • Experience in web and interface design, graphic design, web development, user interface design and user experience.
  • An innovative design approach to navigation and search of audiovisual media.
  • Experience with graphic design tools such as Photoshop, Illustrator or similar.
  • Software development skills using HTML/CSS/JS (recommended HTML5 and CSS3).
  • Proficiency in English.

Interested candidates should send a CV and examples of work done related to this job to Xavier Serra (xavier [dot] serra [at] upf [dot] edu, subject: Web Designer job).

25 Jan 2013 - 19:28 | view
Seminar by Geoffroy Peeters on annotating MIR corpora

Geoffroy Peeters, from IRCAM, will give a seminar on "Annotated MIR Corpora, MSSE search engine for music, Perceptual Tempo" on Thursday, January 24th, at 3:30pm in room 52.321.

Abstract:
In this talk I will focus on three recent topics studied at IRCAM.
 
The first concerns a proposal for the description of annotated MIR corpora. Considering that, today, annotated MIR corpora are provided by various research labs and companies, each one using its own annotation methodology, concept definitions and formats, it is essential to define precisely how annotations are supplied and described. We propose a set of axes along which corpora can be described.
 
The second concerns our experience in integrating music indexing technologies into a third-party search and navigation engine (the Orange MSSE search engine). We explain the work performed in terms of the choice of technology, the development of annotated corpora for training the systems, HMI development, and user tests.
 
The third concerns the estimation of perceptual tempo and the reduction of the so-called octave errors of tempo estimation algorithms. Using data from a Last.fm perceptual experiment, we model the relationship between a set of four audio features and the perceptual tempo using a GMM regression technique. We show that this technique outperforms current tempo estimation algorithms.
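
For readers unfamiliar with GMM regression, the sketch below shows the general idea on synthetic data: fit a Gaussian mixture to the joint distribution of the features and the target, then predict the target as its conditional expectation given the features. The scikit-learn-based implementation and the random stand-in "audio features" are illustrative assumptions, not the system described in the talk.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X, y, n_components=4):
    # Fit a GMM on the joint distribution of features X (n, d) and target y (n,).
    Z = np.column_stack([X, y])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0)
    return gmm.fit(Z)

def gmm_regression_predict(gmm, X):
    # E[y | x]: weighted sum of per-component conditional means, with weights
    # given by each component's responsibility for x under the feature marginal.
    n, d = X.shape
    log_w = np.zeros((n, gmm.n_components))
    cond_mean = np.zeros((n, gmm.n_components))
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :d], gmm.means_[k, d]
        S = gmm.covariances_[k]
        S_xx, S_yx = S[:d, :d], S[d, :d]
        S_xx_inv = np.linalg.inv(S_xx)
        diff = X - mu_x
        # Conditional mean of y given x for component k.
        cond_mean[:, k] = mu_y + diff @ S_xx_inv @ S_yx
        # Log of (mixture weight * Gaussian density of x under the feature marginal).
        _, logdet = np.linalg.slogdet(S_xx)
        maha = np.einsum("ij,jk,ik->i", diff, S_xx_inv, diff)
        log_w[:, k] = np.log(gmm.weights_[k]) - 0.5 * (d * np.log(2 * np.pi)
                                                       + logdet + maha)
    log_w -= log_w.max(axis=1, keepdims=True)
    resp = np.exp(log_w)
    resp /= resp.sum(axis=1, keepdims=True)
    return (resp * cond_mean).sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for "four audio features" and a perceptual tempo target.
    X = rng.normal(size=(500, 4))
    tempo = 120 + 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=5, size=500)
    gmm = fit_joint_gmm(X[:400], tempo[:400])
    pred = gmm_regression_predict(gmm, X[400:])
    print("mean absolute error (BPM):", np.abs(pred - tempo[400:]).mean())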
 
References:
  • G. Peeters and K. Fort. "Towards a (better) definition of the description of annotated m.i.r. corpora," In Proc. of ISMIR, Porto, Portugal, October 2012.
  • G. Peeters, F. Cornu, D. Tardieu, C. Charbuillet, J. J. Burred, M. Ramona, M. Vian, V. Botherel, J.-B. Rault, and J.-P. Cabanal. "A multimedia search and navigation prototype, including music and video-clips," In Proc. of ISMIR, Porto, Portugal, October 2012.
  • G. Peeters and J. Flocon-Cholet. "Perceptual tempo estimation using gmm regression," In Proc. of ACM Multimedia/ MIRUM (Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies), Nara, Japan, October 2012.
18 Jan 2013 - 14:21 | view
Music for Cochlear Implants concert
9 Feb 2013

Saturday February 9th, 2013 at 12PM

Auditori CAIXA FORUM (Barcelona)

Av. Francesc Guardia 6-8, Barcelona

Free admission

 

We invite you to participate in a unique experience in which researchers and musicians come together for the hearing impaired.

musIC is a project on music perception with cochlear implant devices, and it aims to understand how these devices can be further developed. The cochlear implant is an implanted medical device designed mainly to restore the perception of speech sounds, but it still has many limitations for music listening. With this objective in mind, we are organizing a concert specially designed to take into account the limitations of listening to music with cochlear implants. The concert is also intended for the general public.

We will hear pieces played with different instruments and formations: a string quartet and flute, soprano, piano, guitar and ReacTable, an interactive electronic instrument developed by the Music Technology Group.

With this concert we will try to better understand how music is perceived. Attendees can contribute to the research by participating in a survey about this musical experience.

Compositions by: Alejandro Civilotti, Alejandro Fränkel, Sergio Naddei and Luis Nogueira.

Organizers: Music Technology Group (Universitat Pompeu Fabra) and Phonos Foundation. With the support of: Advanced Bionics.

More information: http://phonos.upf.edu/music and http://phonos.upf.edu/blog

 

 

9 Jan 2013 - 18:56 | view