News and Events

Software developer position at the MTG-UPF

In this position you will work with researchers at the MTG-UPF in Barcelona to (1) develop prototypes demonstrating novel ideas, algorithms, and interaction paradigms related to music information processing and (2) implement novel algorithms for processing and interacting with music-related data.

Required skills/qualifications:
Bachelor's degree in Computer Science, with 2+ years of practical experience (a Master's degree is a plus)
Proficiency in both written and spoken English
Proficiency in at least two of PHP, Python, C/C++
Experience with UNIX systems
Experience in the development of Web applications including front-end programming

Preferred skills/experience:
Fast learner with strong problem-solving and analytical skills
Proven ability to develop software solutions for complex design/implementation problems
Familiarity with concepts of signal processing, data mining and machine learning
Experience in working with databases and large datasets
Familiarity with MATLAB

ABOUT MTG-UPF
The Music Technology Group of the Universitat Pompeu Fabra is a leading research group with more than 40 researchers, carrying out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling. The MTG aims to contribute to the improvement of information and communication technologies related to sound and music, carrying out internationally competitive research while transferring its results to society. To that end, the MTG seeks a balance between basic and applied research and promotes interdisciplinary approaches that incorporate knowledge from both scientific/technological and humanistic/artistic disciplines. For more information on MTG-UPF please visit http://mtg.upf.edu

HOW TO APPLY?

Interested candidates should send a resume as well as a cover letter to mtg-info [at] llista [dot] upf [dot] edu


23 Apr 2012 - 14:09
Phonos Concert: Sergio Naddei

On Thursday April 26th 2012 at 19:30h in the Espai Polivalent of the Communication Campus of the UPF, Phonos is organizing a concert by Sergio Naddei.

20 Apr 2012 - 11:26
MTG participates in the "Fira de Recerca en Directe". Barcelona April 24 - 26

MTG participates in the 10th edition of the Barcelona scientific fair "Fira de Recerca en Directe". The fair is organized by CatalunyaCaixa at La Pedrera and showcases projects from research groups and institutes in Barcelona.

The aim of the fair is to disseminate some of the research activity taking place in Barcelona to the general public, with a special focus on fostering scientific interest among young students.

MTG will have a stand where Dmitry Bogdanov will be presenting The Musical Avatar, and Martí Umbert will show the Vocaloid project. The audience will also have the opportunity to view and interact with some demos.

Information:
Dates: April 24th, 25th and 26th
Time: 10am to 2pm and 4pm to 8pm
Place: Sala Gaudi at La Pedrera
Address: Passeig de Gràcia, 92, Barcelona

19 Apr 2012 - 18:20
Ferdinand Fuhrmann defends his PhD thesis on April 25th

Ferdinand Fuhrmann defends his PhD thesis entitled "Automatic Musical Instrument Recognition from Polyphonic Music Audio Signals" on Wednesday 25th of April 2012 at 09:00h in room 55.309.

The jury members of the defense are: Gaël Richard (TELECOM ParisTech), Emilia Gómez (UPF), Josep Lluís Arcos (IIIA-CSIC).

Abstract: Facing the rapidly growing amount of digital media, the need for effective data management is challenging technology. In this context, we approach the problem of automatically recognising musical instruments from music audio signals. Information regarding the instrumentation is among the most important semantic concepts humans use to communicate musical meaning. Hence, knowledge of the instrumentation eases a meaningful description of a music piece, indispensable for addressing the aforementioned need with modern (music) technology. Given the competence of the human auditory system, the problem may sound elementary. However, during at least two decades of study from various perspectives, the problem has proven to be highly complex; no system has yet been presented that comes close to human-comparable performance. In particular, resolving multiple simultaneously sounding sources poses the main difficulty for computational approaches.

In this dissertation we present a general-purpose method for the automatic recognition of musical instruments from music audio signals. Unlike many related approaches, our conception largely avoids laboratory constraints on the method's algorithmic design, its input data, or the targeted application context. In particular, the developed method models 12 instrumental categories, including pitched and percussive instruments as well as the human singing voice, all frequently used in Western music. To account for the presumably complex nature of the input signal, we limit the most basic process in the algorithmic chain to the recognition of a single predominant musical instrument from a short audio fragment. By applying statistical pattern recognition techniques together with properly designed, extensive datasets, we predict one source from the polytimbral timbre of the analysed sound and thereby avoid resolving the mixture. To compensate for this restriction we further incorporate information derived from a hierarchical music analysis: first, we utilise musical context to extract instrumental labels from the time-varying model decisions; second, the method incorporates information regarding the piece's formal aspects into the recognition process; finally, we include information from the collection level by exploiting associations between musical genres and instrumentations.
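The step of extracting track-level instrument labels from time-varying model decisions can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: the function names, the majority-vote smoothing window, and the presence threshold are all assumptions made for illustration.

```python
from collections import Counter

def smooth_decisions(frame_labels, window=3):
    """Majority-vote smoothing: exploit musical context by assuming the
    predominant instrument is locally stable over consecutive fragments."""
    smoothed = []
    for i in range(len(frame_labels)):
        lo = max(0, i - window // 2)
        hi = min(len(frame_labels), i + window // 2 + 1)
        smoothed.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return smoothed

def extract_labels(frame_labels, min_fraction=0.2):
    """Keep as track-level labels only the instruments predicted in a
    sufficiently large fraction of the fragments."""
    counts = Counter(frame_labels)
    total = len(frame_labels)
    return sorted(l for l, c in counts.items() if c / total >= min_fraction)
```

For example, a sequence of fragment-level decisions with a single spurious "piano" amid "voice" frames would be smoothed to voice-only before thresholding, so only the instruments that persist over time survive as labels.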

In our experiments we assess the performance of the developed method with a thorough evaluation methodology using real music signals only, estimating the method's accuracy, generality, scalability, robustness, and efficiency. Both the models' recognition performance and the label extraction algorithm exhibit reasonable accuracies given the problem at hand. Furthermore, we demonstrate that the method generalises well in terms of the modelled categories and scales to input data of any complexity, hence providing a robust extraction of the targeted information. Moreover, we show that the information regarding the instrumentation of a Western music piece is highly redundant, enabling a great reduction of the data to analyse: our best settings reach a recognition performance of almost 0.7 in terms of the applied F-score from less than 50% of the input data. Finally, the experiments incorporating information on the musical genre of the analysed pieces do not show the expected improvement in recognition performance, suggesting that a more fine-grained instrumental taxonomy is needed to exploit this kind of information.

17 Apr 2012 - 23:32
Participation in FMA 2012

Emilia Gómez and Justin Salamon participate in the III Interdisciplinary Conference on Flamenco Research (INFLA) and the II International Workshop on Folk Music Analysis (FMA), which take place in Seville on April 19th and 20th, 2012. The articles presented are:

  • Emilia Gómez, Jordi Bonada and Justin Salamon: "Automatic Transcription of Flamenco Singing from Monophonic and Polyphonic Music Recordings"
  • Paco Gomez, Aggelos Pikrakis, Joaquin Mora, Jose Miguel Diaz-Bañez, Emilia Gomez, Francisco Escobar, Sergio Oramas and Justin Salamon: "Automatic Detection of Melodic Patterns in Flamenco Singing by Analyzing Polyphonic Music Recordings"

The MTG also co-organizes a panel discussion in the context of the MIReS project:

  • "Technological challenges for the computational modeling of the world's musical heritage", co-organized by Polina Proutskova and Emilia Gómez.

16 Apr 2012 - 16:08
Participation in AdMIRe 2012

Justin Salamon, Martin Haro, Mohamed Sordo and Xavier Serra participate in the 4th International Workshop on Advances in Music Information Research: "The Web of Music" (AdMIRe 2012), which takes place in Lyon on April 17th 2012 as part of the 21st International World Wide Web Conference (WWW 2012). The articles presented are:

14 Apr 2012 - 11:56
Sounds of Barcelona in a EU contest

Our Sounds of Barcelona project is participating in an EU contest for the best innovations among technological initiatives with an educational and social impact. The project Sons de Barcelona was created as an educational initiative around Freesound.org, running workshops in schools to foster interest in music technologies among the student community using the ideas and technologies behind Freesound. You can find more information about the contest entry at: http://engageawards.com/entry/68

If you think that Sounds of Barcelona is a good initiative worth promoting, please vote for it.

4 Apr 2012 - 10:35
Freesound survey

As part of our research on understanding the Freesound community in order to improve the platform that supports it, we are conducting a small survey to better understand users' motivations for using Freesound. The survey and the user responses are in the Freesound Forum.

21 Mar 2012 - 11:35
Phonos Concert: Carlos Vaquero

On Tuesday March 20th 2012 at 19:30h in the Espai Polivalent of the Communication Campus of the UPF, Phonos is organizing a concert with Carlos Vaquero.

15 Mar 2012 - 16:50
Seminar by Andre Holzapfel on beat tracking

Andre Holzapfel, from INESC Porto, gives a talk entitled "Selective sampling for beat tracking evaluation" on Friday March 2nd at 4pm in room 55.410.

Abstract: An approach is presented that identifies music samples which are difficult for current state-of-the-art beat trackers. In order to estimate this difficulty even for examples without ground truth, a method motivated by selective sampling is applied. This method assigns a degree of difficulty to a sample based on the mutual disagreement between the outputs of various beat tracking systems. On a large beat-annotated dataset we show that this mutual agreement is correlated with the mean performance of the beat trackers evaluated against the ground truth, and hence can be used to identify difficult examples by predicting poor beat tracking performance. Towards the aim of advancing future beat tracking systems, we use our method to form a new dataset containing a high proportion of challenging music examples. We analyze the relations between perceptual difficulty and difficulty for automatic beat tracking using these data, and identify which signal properties hold the greatest potential for improving automatic beat tracking.
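The mutual-agreement idea can be sketched as follows. This is a simplified illustration, not the talk's actual method: the tolerance-based agreement score and the function names are assumptions (the actual work relies on established beat tracking evaluation measures), but it shows how disagreement among several trackers' outputs yields a difficulty estimate without ground truth.

```python
from itertools import combinations

def beat_agreement(beats_a, beats_b, tol=0.07):
    """Fraction of beats in A matched by a beat in B within +/- tol
    seconds (a crude tolerance-window score, used here for illustration)."""
    if not beats_a:
        return 0.0
    matched = sum(1 for a in beats_a
                  if any(abs(a - b) <= tol for b in beats_b))
    return matched / len(beats_a)

def mean_mutual_agreement(outputs):
    """Average pairwise agreement over the beat sequences produced by
    several trackers; a low value flags the excerpt as likely difficult."""
    pairs = list(combinations(outputs, 2))
    if not pairs:
        return 1.0
    scores = [(beat_agreement(a, b) + beat_agreement(b, a)) / 2
              for a, b in pairs]
    return sum(scores) / len(scores)
```

Excerpts whose mean mutual agreement falls below some chosen threshold would be selected for the challenging subset, with no annotation required.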

24 Feb 2012 - 10:58