RIS ID

16143

Publication Details

Cheng, E., Burnett, I. S. & Ritz, C. H. (2006). Using spatial audio cues from speech excitation for meeting speech segmentation. 8th International Conference on Signal Processing: ICSP2006 Proceedings (pp. 3067-3070). USA: IEEE Press.

Abstract

Multiparty meetings generally involve stationary participants. Participant location information can thus be used to segment the recorded meeting speech into each speaker's 'turn' for meeting 'browsing'. To represent speaker location information from speech, previous research has shown that the most reliable time delay estimates are extracted from the Hilbert envelope of the linear prediction residual signal. The authors' past work proposed the use of spatial audio cues to represent speaker location information. This paper proposes extracting spatial audio cues from the Hilbert envelope of the speech residual to indicate changing speaker location for meeting speech segmentation. Experiments conducted on recordings of a real acoustic environment show that spatial cues derived from the Hilbert envelope are more consistent across frequency subbands and more clearly distinguish between spatially distributed speakers than spatial cues estimated from the recorded speech or the residual signal itself.
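
The processing chain described in the abstract (linear prediction residual, then Hilbert envelope, then inter-channel delay estimation) can be sketched in a few lines of Python. The sketch below is not the authors' implementation: the function names, the LPC order of 16, the use of librosa for linear prediction, and the plain cross-correlation peak search are assumptions made here for illustration, and it omits the per-subband spatial cue extraction the paper actually evaluates.

```python
import numpy as np
import librosa
from scipy.signal import lfilter, hilbert, correlate

def hilbert_envelope_of_residual(x, lpc_order=16):
    """Hilbert envelope of the linear prediction (LP) residual of a 1-D signal.

    The residual is obtained by inverse-filtering x with its own LP
    coefficients; the envelope is the magnitude of the analytic signal
    of that residual.
    """
    x = np.asarray(x, dtype=float)
    a = librosa.lpc(x, order=lpc_order)   # prediction filter A(z), a[0] == 1
    residual = lfilter(a, [1.0], x)       # inverse filtering yields the LP residual
    return np.abs(hilbert(residual))      # magnitude of the analytic signal

def envelope_time_delay(x_left, x_right, fs, lpc_order=16):
    """Estimate the inter-channel time delay (seconds) between two microphone
    signals from the cross-correlation of their residual Hilbert envelopes.
    Positive values mean the left channel is delayed relative to the right."""
    env_l = hilbert_envelope_of_residual(x_left, lpc_order)
    env_r = hilbert_envelope_of_residual(x_right, lpc_order)
    xc = correlate(env_l - env_l.mean(), env_r - env_r.mean(), mode="full")
    lag = np.argmax(xc) - (len(env_r) - 1)   # peak position relative to zero lag
    return lag / fs
```

Applying the same envelope cross-correlation after a filterbank split would come closer to the frequency-subband spatial cues the abstract refers to; the broadband version above only illustrates why the Hilbert envelope of the residual is a convenient signal for delay estimation.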


Link to publisher version (DOI)

http://dx.doi.org/10.1109/ICOSP.2006.346086