University of Wollongong

Using spatial audio cues from speech excitation for meeting speech segmentation

conference contribution
posted on 2024-11-15, 19:38, authored by Eva Cheng, Ian Burnett, Christian Ritz
Multiparty meetings generally involve stationary participants. Participant location information can thus be used to segment the recorded meeting speech into each speaker's 'turn' for meeting 'browsing'. To represent speaker location information from speech, previous research showed that the most reliable time delay estimates are extracted from the Hilbert envelope of the linear prediction residual signal. The authors' past work has proposed the use of spatial audio cues to represent speaker location information. This paper proposes extracting spatial audio cues from the Hilbert envelope of the speech residual to indicate changing speaker location for meeting speech segmentation. Experiments conducted on recordings of a real acoustic environment show that spatial cues from the Hilbert envelope are more consistent across frequency subbands and can more clearly distinguish between spatially distributed speakers than spatial cues estimated from the recorded speech or the raw residual signal.
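The processing chain the abstract describes (linear prediction residual, then its Hilbert envelope, then time delay estimation between microphone channels) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the LP order, the GCC-PHAT delay estimator, and all function names are assumptions for the sketch, using only numpy and scipy.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import hilbert, lfilter

def lp_residual(x, order=12):
    """Linear prediction residual via the autocorrelation (Yule-Walker) method.
    Note: order=12 is an illustrative choice, not taken from the paper."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])  # predictor coefficients
    # Inverse filter A(z) = 1 - sum_k a_k z^-k whitens the speech
    return lfilter(np.concatenate(([1.0], -a)), [1.0], x)

def hilbert_envelope(x):
    """Magnitude of the analytic signal (the Hilbert envelope)."""
    return np.abs(hilbert(x))

def gcc_phat_delay(x1, x2, fs):
    """GCC-PHAT estimate of the delay of x2 relative to x1, in seconds.
    One common delay estimator; the paper's estimator may differ."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cc = X2 * np.conj(X1)
    cc /= np.abs(cc) + 1e-12          # PHAT weighting: keep phase only
    r = np.roll(np.fft.irfft(cc, n), n // 2)
    return (np.argmax(r) - n // 2) / fs

# Per-channel cue extraction: delay between the residual envelopes
# of two microphone channels ch1, ch2 sampled at fs:
#   tau = gcc_phat_delay(hilbert_envelope(lp_residual(ch1)),
#                        hilbert_envelope(lp_residual(ch2)), fs)
```

In this framing, the speaker-location cue per analysis frame is the inter-channel delay of the residual envelopes; the abstract's claim is that this envelope-domain cue is more consistent across frequency subbands than cues computed on the raw speech or raw residual.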

History

Citation

Cheng, E., Burnett, I. S. & Ritz, C. H. (2006). Using spatial audio cues from speech excitation for meeting speech segmentation. 8th International Conference on Signal Processing: ICSP2006 Proceedings (pp. 3067-3070). USA: IEEE Press.

Parent title

International Conference on Signal Processing Proceedings, ICSP

Volume

4

Language

English

RIS ID

16143
