Speaker discriminability for visual speech modes

RIS ID

32411

Publication Details

Kim, J., Davis, C., Kroos, C. & Hill, H. C., Speaker discriminability for visual speech modes, Interspeech 2009, pp. 2259-2262, Brighton, UK: Interspeech.

Abstract

Does speech mode affect recognizing people from their visual speech? We examined 3D motion data from 4 talkers saying 10 sentences (twice). Speech was produced in noise, in quiet, or whispered. Principal Component Analyses (PCAs) were conducted, and speaker classification was determined by Linear Discriminant Analysis (LDA). The first five PCs for the rigid motion and the first 10 PCs each for the non-rigid motion and the combined motion were input to a series of LDAs covering all possible combinations of the retained PCs. The discriminant functions and classification coefficients were determined on the training data and used to predict the talker of the test data. Classification performance for both the in-noise and whispered speech modes was superior to the in-quiet mode. This superiority held even when only the first PC (jaw motion) was used, i.e., measures of jaw motion when speaking in noise or whispering hold promise for bimodal person recognition or verification.
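The pipeline described in the abstract (motion data reduced by PCA, then talker classification by LDA over combinations of the retained PCs) can be illustrated with a short sketch. The sketch below uses scikit-learn on synthetic data; the data shapes, the number of retained components, and the train/test split are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a PCA + LDA talker-classification pipeline.
# All shapes and the train/test split are assumptions for illustration.
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Assumed layout: one feature vector per utterance (e.g. flattened 3D marker
# trajectories), ordered talker x sentence x repetition:
# 4 talkers x 10 sentences x 2 repetitions = 80 samples.
n_talkers, n_sentences, n_reps, n_features = 4, 10, 2, 300
X = rng.normal(size=(n_talkers * n_sentences * n_reps, n_features))
y = np.repeat(np.arange(n_talkers), n_sentences * n_reps)   # talker labels
rep = np.tile(np.arange(n_reps), n_talkers * n_sentences)   # repetition index

# One plausible split: train on the first repetition of each sentence,
# test on the second (the paper's exact scheme may differ).
train, test = rep == 0, rep == 1

# Reduce the motion data to a small set of PCs (the paper retains, e.g.,
# 10 PCs for non-rigid motion and 5 for rigid motion).
pca = PCA(n_components=10).fit(X[train])
Z_train, Z_test = pca.transform(X[train]), pca.transform(X[test])

# Fit discriminant functions on the training data and classify the talker
# of each test utterance.
lda = LinearDiscriminantAnalysis().fit(Z_train, y[train])
print(f"accuracy with all retained PCs: {lda.score(Z_test, y[test]):.2f}")

# The paper also runs LDAs for every combination of the retained PCs;
# a minimal version of that search:
scores = {}
for r in range(1, Z_train.shape[1] + 1):
    for subset in combinations(range(Z_train.shape[1]), r):
        cols = list(subset)
        clf = LinearDiscriminantAnalysis().fit(Z_train[:, cols], y[train])
        scores[subset] = clf.score(Z_test[:, cols], y[test])
best = max(scores, key=scores.get)
print("best PC subset:", best, "accuracy:", round(scores[best], 2))
```

With real motion data, the single-PC case (jaw motion) can be examined by restricting the subset search to the first component, which is the comparison the abstract highlights for in-noise and whispered speech.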

Please refer to publisher version or contact your library.