Motion as a cue for viewpoint invariance
Natural face and head movements were mapped onto a computer-rendered three-dimensional average of 100 laser-scanned heads in order to isolate movement information from spatial cues and to separate nonrigid facial movements from rigid head movements (Hill & Johnston, 2001). Experiment 1 used a delayed match-to-sample paradigm to investigate whether subjects could recognize, from a rotated view, facial motion that had previously been presented at a full-face view. Experiment 2 compared recognition for test views that fell either between or outside the initially presented views. Experiment 3 compared discrimination at full-face, three-quarter, and profile views after learning at each of these views. A significant face inversion effect in Experiments 1 and 2 indicated that subjects were using face-based information, rather than more general motion or temporal cues, for optimal performance. In each experiment, recognition performance declined with a change in viewpoint between sample and test views only when rigid motion was present. Nonrigid, face-based motion thus appears to be encoded in a viewpoint-invariant, object-centred manner, whereas rigid head movement is encoded in a more view-specific manner.