Soundscape Modelling

Soundscape design is beginning to receive considerable attention in virtual environments and interactive media. Current trends (e.g. online communities and games, web and mobile technologies, augmented-reality tourism platforms, and 2D and 3D virtual cartography and urban design) may require new paradigms of soundscape design and interaction. The MTG technology for soundscape design is an online platform that aims to simplify the authoring process while offering a realistic and interactive soundscape. A sample-based synthesis algorithm is driven by graph models, with sound samples retrieved from Freesound, a user-contributed audio repository. The synthesis engine runs on a server that receives position update messages, and the soundscape is delivered to the client application as a web stream. The system provides a standard format for soundscape design.
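The graph-driven, sample-based approach described above can be sketched in a few lines. This is a minimal illustration, not the MTG engine: the sample names, graph structure, and weighted-random-walk scheduling are all hypothetical, standing in for samples retrieved from Freesound and the platform's actual graph models.

```python
import random

# Hypothetical graph model: nodes are sound samples (in the real system,
# retrieved from Freesound), weighted edges give transition probabilities.
GRAPH = {
    "birds_001": [("wind_002", 0.7), ("birds_001", 0.3)],
    "wind_002":  [("birds_001", 0.5), ("crowd_003", 0.5)],
    "crowd_003": [("wind_002", 1.0)],
}

def next_sample(current, rng=random):
    """Pick the next sample to play via a weighted random walk on the graph."""
    targets, weights = zip(*GRAPH[current])
    return rng.choices(targets, weights=weights, k=1)[0]

def schedule(start, steps, rng=random):
    """Generate a playback sequence of the given length from a start node."""
    seq = [start]
    for _ in range(steps):
        seq.append(next_sample(seq[-1], rng))
    return seq
```

A real engine would additionally mix and crossfade the scheduled samples and encode the result as the web stream delivered to the client; the random walk only decides *what* plays next.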
For a virtual tourism application developed within the Metaverse1 project, we implemented a soundscape client application that acts as a proxy between the SecondLife client, the SecondLife virtual environment server, and our streaming server. The communication between the SecondLife client and the server is intercepted and used to control our streaming server: when the SecondLife avatar enters the virtual world, a new listener is created on the streaming server through its web API, and the proxy application receives the listener id and the streaming URL, which are used in further communication with the streaming server. Listener position and rotation messages sent from the SecondLife client to the virtual world server are used to update the listener position in the soundscape generation, and the streaming URL is passed on to the SecondLife client, which renders the audio stream as the “music URL” associated with a region in the virtual world. To experience the “Virtual Travel” soundscape, the proxy client application must therefore be running, and the audio settings in the SecondLife Viewer should be configured to use the music stream as the main environmental sound. See the downloads page for more details.
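The proxy's role in this flow can be sketched as follows. The message and API formats here are assumptions for illustration only: the real SecondLife protocol and the streaming server's web API differ, and the field names are hypothetical.

```python
import json

class SoundscapeProxy:
    """Sketch of the proxy sitting between the SecondLife client, the
    virtual-world server, and the soundscape streaming server."""

    def __init__(self, listener_id, stream_url):
        # Values returned by the streaming server's web API when a new
        # listener is created (i.e. when the avatar enters the world).
        self.listener_id = listener_id
        self.stream_url = stream_url

    def on_agent_update(self, message):
        """Translate an intercepted avatar position/rotation update into a
        position message for this listener on the streaming server."""
        return json.dumps({
            "listener": self.listener_id,
            "position": message["position"],   # (x, y, z) in region coordinates
            "rotation": message["rotation"],   # avatar heading
        })

    def music_url(self):
        """URL the SecondLife client renders as the region's "music URL"."""
        return self.stream_url
```

In practice the proxy would forward the original SecondLife traffic unchanged while sending these derived messages to the streaming server over a side channel.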
- People: Stefan Kersten, Gerard Roma, Mattia Schirosa, Jordi Janer
- Contact: Jordi Janer < jordi dot janer at upf dot edu >
This technology has been developed as the main outcome of the Metaverse1 project, an ITEA2 project, and is supported by: