Autonomous Generation of Soundscapes using Unstructured Sound Databases

Publication Type: Master Thesis
Year of Publication: 2009
Authors: Finney, N.
Preprint/postprint document: static/media/Finney-Nathan-Master-Thesis-2009.pdf
Abstract: This research focuses on the generation of soundscapes from unstructured sound databases for the sonification of virtual environments. A generalized design methodology based on soundscape categorization, perceptual discrimination of sources, and media design principles is proposed, with the underlying principle that any soundscape is composed of a source layer and a textural layer. Building on these principles, a generative model is proposed covering sound object retrieval, segmentation, parameterization, and resynthesis. The model incorporates wavelet resynthesis, sample playback, and a technique for concatenative synthesis using an MFCC-based BIC segmentation method. Principles for optimal grain size selection with respect to source layer content are discussed, and the concatenation of segments is based on a relative MFCC Euclidean distance calculation. An implementation of the model using a photorealistic panoramic image in an urban context is described, drawing on a sound database of community-provided recordings. The implementation uses sample playback and concatenative synthesis in order to maximally preserve the contextual attributes of the photorealistic environment, while wavelet resynthesis is discussed as a potential avenue for further development. The methods of classification, segmentation, and synthesis specific to this application are discussed, along with a validation of the model using a subjective evaluation. The results of the study demonstrate the applicability of the design principles to an autonomous generation engine, while highlighting some of the challenges of implementing autonomous functionality related to retrieval, segmentation, and synthesis parameterization.
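The abstract mentions that segments are concatenated according to a relative MFCC Euclidean distance calculation. A minimal sketch of that selection step might look as follows; the function name, the mean-pooling of MFCC frames, and the array shapes are assumptions for illustration, not the thesis's actual implementation:

```python
import numpy as np

def nearest_segment(query_mfcc, candidate_mfccs):
    """Pick the candidate segment closest to the query in MFCC space.

    query_mfcc      -- array of shape (n_frames, n_coeffs) for the query segment
    candidate_mfccs -- list of such arrays, one per candidate segment

    Each segment is summarized by its mean MFCC vector (an assumed
    simplification); the candidate with the smallest Euclidean distance
    to the query's mean vector is returned.
    """
    query_mean = np.asarray(query_mfcc).mean(axis=0)
    distances = [
        np.linalg.norm(np.asarray(c).mean(axis=0) - query_mean)
        for c in candidate_mfccs
    ]
    return int(np.argmin(distances))
```

In a concatenative synthesizer this selection would run once per output grain, with the previous grain's MFCCs as the query, so that successive segments remain spectrally similar.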