Abstract

Computational approaches that conform to the cultural context are of paramount importance in music information research. The current state of the art has a limited view of such context, which manifests in ontologies, data models, cognition models, and interaction models that are biased toward market-driven popular music. As a step towards addressing this, the thesis draws upon multimodal data sources concerning art music traditions, extracting culturally relevant and musically meaningful information about melodic intervals from each of them and structuring it with formal knowledge representations. As part of this, we propose novel approaches to describe intonation in audio music recordings, and to use and adapt the semantic web infrastructure to complement this with knowledge extracted from text data. Because the data sources are complementary, structuring and linking the extracted information mutually enriches them. Over this multimodal knowledge base, we propose similarity measures for discovering musical entities, yielding a culturally sound navigation space.