Biblio

57 results. Filter: first letter of title is R.
Book
Book Chapter
Jordà, S. (2007).  The reacTable. (Polotti, P., & Rocchesso, D., Eds.). Sound to Sense, Sense to Sound – A State of the Art in Sound and Music Computing. 490.
Roma, G., & Herrera P. (2013).  Representing Music as Work in Progress. Structuring Music through Markup Language: Designs and Architectures. 119-134.
Conference Paper
Pons, J., & Serra X. (2019).  Randomly weighted CNNs for (music) audio classification. 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP2019).
Jordà, S. (2006).  The reacTable. International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH.
Jordà, S. (2010).  The Reactable: Tangible and Tabletop Music Performance. CHI 2010: 28th ACM Conference on Human Factors in Computing Systems. 2989-2994.
Giraldo, S., & Ramirez R. (2013).  Real Time Modeling of Emotions by Linear Regression. MML 2013: International Workshop on Machine Learning and Music, ECML/PKDD.
Papiotis, P., & Purwins H. (2010).  Real-time Accompaniment using lyrics-matching QBH. 7th International Symposium on Computer Music Modeling and Retrieval (CMMR). 279-280.
Porcaro, L., & Saggion H. (2019).  Recognizing Musical Entities in User-generated Content. International Conference on Computational Linguistics and Intelligent Text Processing (CICLing) 2019.
Roma, G., Nogueira W., & Herrera P. (2013).  Recurrence Quantification Analysis Features for Environmental Sound Recognition. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). 1-4.
Väljamäe, A., Mealla S., Steffert T., Holland S., Marimon X., Oliveira A., et al. (2013).  A Review of Real-Time EEG Sonification Research. The 19th International Conference on Auditory Display (ICAD-2013).