Abstract

Measuring music similarity is essential for multimedia retrieval. From a content-based point of view, this task can be regarded as obtaining a suitable distance measure between songs defined on a certain feature space. In this paper, we propose three such distance measures. First, a low-level measure based on tempo-related aspects. Second, a high-level semantic measure based on regression by support vector machines over different groups of musical dimensions such as genre and culture, moods and instruments, or rhythm and tempo. Finally, a hybrid measure which combines the two above-mentioned measures with two state-of-the-art low-level measures: a Euclidean distance based on principal component analysis of timbral, temporal, and tonal descriptors, and a timbral distance based on single-Gaussian MFCC modeling. We evaluate the proposed measures against a number of state-of-the-art measures, objectively on a comprehensive set of ground-truth music collections, and subjectively by means of listeners' playlist similarity and inconsistency ratings. Results show that, despite being conceptually different, the low-level tempo-based and semantic classifier-based measures achieve performance comparable to the considered baseline approaches, while the hybrid distance achieves even higher performance. Furthermore, the proposed measures open up the possibility of exploring distance metrics based on truly semantic notions.