Abstract

In this long abstract, we present an algorithm for automatically annotating music with tags that is fast, scalable, and relatively easy to implement. It propagates tags among audio items based on acoustic similarity. The algorithm makes use of a variety of acoustic features, ranging from spectral features to rhythmic, tonal, and high-level features (such as mood, genre, and gender). These features are projected into a reduced d-dimensional space and finally combined with tempo and semantic features. A k-Nearest Neighbor classifier, with a modified weighting function and two different distance measures, is then applied to propose tags for new music items.
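As a concrete illustration of the propagation step, the sketch below shows a weighted k-NN tag proposer in Python. It is not the paper's implementation: the inverse-distance weighting and the Euclidean/cosine pair stand in, as assumptions, for the modified weighting function and the two distance measures mentioned above, and the names knn_tag_propagation, library_feats, and library_tags are hypothetical.

```python
import numpy as np

def knn_tag_propagation(query, library_feats, library_tags, k=10,
                        metric="euclidean"):
    """Propose tags for a query track from its k nearest annotated neighbors.

    query         : (d,) feature vector of the new track
    library_feats : (n, d) feature matrix of already-annotated tracks
    library_tags  : list of n tag collections, one per annotated track
    """
    if metric == "euclidean":
        dists = np.linalg.norm(library_feats - query, axis=1)
    elif metric == "cosine":
        norms = np.linalg.norm(library_feats, axis=1) * np.linalg.norm(query)
        dists = 1.0 - (library_feats @ query) / np.maximum(norms, 1e-12)
    else:
        raise ValueError(f"unknown metric: {metric}")

    neighbors = np.argsort(dists)[:k]      # indices of the k closest tracks

    # Accumulate weighted votes for each neighbor's tags;
    # inverse-distance weighting is an illustrative choice, not the
    # paper's modified weighting function.
    scores = {}
    for i in neighbors:
        weight = 1.0 / (dists[i] + 1e-12)
        for tag in library_tags[i]:
            scores[tag] = scores.get(tag, 0.0) + weight

    # Tags sorted by accumulated weighted votes, strongest first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For example, calling knn_tag_propagation(q, X, tags, k=10, metric="cosine") on a query vector q and an annotated library (X, tags) returns a ranked list of (tag, score) pairs, from which the top entries can be proposed as annotations for the new item.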