University of Wollongong

A voice interface for sound generators: adaptive and automatic mapping of gestures to sound

Conference contribution, posted on 2024-11-13 by Stefano Fasciani and Lonce Wyse
Sound generators and synthesis engines expose a large set of parameters, allowing run-time timbre morphing and exploration of sonic space. However, control over these high-dimensional interfaces is constrained by the physical limitations of performers. In this paper we propose the exploitation of vocal gesture as an extension or alternative to traditional physical controllers. The approach uses dynamic aspects of vocal sound to control variations in the timbre of the synthesized sound. The mapping from vocal to synthesis parameters is automatically adapted to information extracted from vocal examples as well as to the relationship between parameters and timbre within the synthesizer. The mapping strategy aims to maximize the breadth of the explorable perceptual sonic space over a set of the synthesizer's real-valued parameters, indirectly driven by the voice-controlled interface.


Citation

Fasciani, S. & Wyse, L. 2012, 'A voice interface for sound generators: adaptive and automatic mapping of gestures to sound', Proceedings of the 12th Conference on New Interfaces for Musical Expression (NIME12), University of Michigan, United States, pp. 1-4.

Pagination: 1-4
Language: English
RIS ID: 107937
