Judging sex and identity from isolated facial motion
Observers made judgements about the sex and identity of an average computer head model animated using 3D motion capture data. The animations differed only in their movement, all other cues being held constant. Despite the absence of most cues, observers were able to judge sex at levels significantly better than chance (mean 61%, SE 2%; t(15) = 6.5, p << .05). This finding was replicated using a 2-AFC task (59%, 2%; t(13) = 5.0, p << .05). Performance with animations played backwards (54%, 3%; t(13) = 1.3, p > .1) or upside-down (57%, 4%; t(13) = 1.9, p = .08) was near chance. Motion therefore provides useful information for sex judgements, and this information lies at the level of global facial movement rather than individual frames or local image velocities. Observers could also sort animations according to identity, putting different examples of the same person together, significantly better than chance (t(15) = 5.3, p << .05). They could likewise identify the odd one out of three better than chance (57%, 3%; t(11) = 8.09, p << .05). This task was disrupted by inversion in the image plane (50%, 3%; t(11) = 4.4, p << .05) but not by playing the animations backwards (57%, 3%; p > .1). Under both manipulations performance remained above chance (t(11) = 5.7, p << .05 and t(11) = 7.7, p << .05, respectively). Motion must therefore provide cues to identity that are common across different examples of the same person but differ between individuals; this information is disrupted by inversion but not by reversed playback. Thus a simple representation of facial movement provides useful information for face processing tasks, and the inversion effects implicate face processing mechanisms. While subjects may be using low-level motion cues to some extent in the identity task, the sex judgement task appears to depend upon sensing dynamic changes in facial configuration.
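The chance-level comparisons reported above are one-sample t-tests on per-observer proportions correct. A minimal sketch of that calculation follows; the observer scores here are hypothetical illustration values, not the study's data, and the function name is an assumption:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(scores, chance=0.5):
    """t statistic and degrees of freedom for a one-sample t-test of
    mean accuracy against a chance level (0.5 for a 2-AFC task)."""
    n = len(scores)
    m = mean(scores)
    se = stdev(scores) / sqrt(n)  # standard error of the mean
    return (m - chance) / se, n - 1

# Hypothetical per-observer proportions correct (illustration only)
scores = [0.55, 0.60, 0.65, 0.58, 0.62, 0.59, 0.63, 0.61]
t, df = one_sample_t(scores)
# Compare t against the critical value for df degrees of freedom
```

For the odd-one-out-of-three task the chance level passed in would be 1/3 rather than 0.5.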