View invariance in facial motion
Motion may play a large role in the generation of face representations: facial movement has been shown to facilitate recognition, identity categorisation, and gender judgments. Dynamic information can be isolated from spatial information by driving a 3-D computer-rendered facial model (an avatar) with the movements of an actor. The perception of static faces has been shown to be viewpoint-dependent (Hill et al, 1997 Cognition 62 201 - 222). To investigate viewpoint dependence in dynamic faces, an avatar was animated with an actor's movements. Subjects were shown a full-face facial movement and were then asked to judge which of two rotated moving avatars matched the first face. Test view, orientation, and type of movement (rigid + nonrigid versus nonrigid alone) were manipulated. Nonrigid movement alone produced an advantage for upright faces and no effect of view. Rigid and nonrigid movement presented together produced an advantage for upright faces and a decline in performance at larger test rotations; no interaction was found. This suggests that nonrigid facial movement is represented in a viewpoint-invariant manner, whereas the addition of rigid head movements encourages a more viewpoint-dependent encoding.