Numerous psychological studies have shown that humans develop various stylistic patterns of motion behaviour, or dynamic signatures, which can in general, and in some cases uniquely, be associated with an individual. In a broad sense, such motion features provide a basis for non-verbal communication, or body language, and in more specific circumstances they combine to form a Dynamic Finger Print (DFP) of an individual, such as their gait, or walking pattern. A new modelling and classification approach for spatiotemporal human motions, and in particular the walking gait, is proposed. The movements are obtained through a full-body inertial motion capture suit, allowing unconstrained freedom of movement in natural environments. The suit carries a network of 16 miniature inertial sensors distributed around the body of the individual. Each inertial sensor wirelessly provides multiple streams of measurements: its spatial orientation, plus energy-related quantities, namely velocity, acceleration, angular velocity and angular acceleration. These measurements are subsequently transformed and interpreted as features of a dynamic biomechanical model with 23 degrees of freedom (DOF). Compared with traditional optically based motion capture technologies, this scheme provides a far richer array of ground-truth information with which to model dynamic human motions. Using a subset of the available multidimensional features, several successful classification models were developed through a supervised machine learning approach. This chapter describes the approach and methods used, together with several successful outcomes demonstrating: plausible DFP models amongst several individuals performing the same tasks, models of common motion tasks performed by several individuals, and finally a model to differentiate abnormal from normal motion behaviour. Future developments, extending the range of features to include the energy-related attributes, are also discussed.
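The supervised classification step described above can be sketched as follows. This is a minimal illustration, not the chapter's actual pipeline: the sensor/channel layout, window length, summary features, classifier choice and the synthetic signals standing in for individual dynamic signatures are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_CHANNELS = 16 * 3   # 16 body-worn sensors x 3 orientation channels (assumed layout)
WINDOW = 50           # samples per gait window (assumed)

def synth_windows(n, amp, freq):
    """Synthetic gait-like windows for one 'individual': sinusoids whose
    amplitude and frequency stand in for a dynamic signature."""
    t = np.linspace(0.0, 2.0 * np.pi, WINDOW)
    signal = amp * np.sin(freq * t)
    X = signal + 0.1 * rng.standard_normal((n, N_CHANNELS, WINDOW))
    # Summarise each window by per-channel mean and standard deviation
    return np.hstack([X.mean(axis=2), X.std(axis=2)])

# Two hypothetical individuals with slightly different dynamic signatures
X = np.vstack([synth_windows(100, 1.0, 2.0), synth_windows(100, 1.3, 2.5)])
y = np.array([0] * 100 + [1] * 100)  # label: which individual produced the window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.2f}")
```

The same structure carries over to the other two tasks mentioned: relabelling the windows by motion task yields a task classifier, and labelling them normal/abnormal yields the anomaly model.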
In doing so, valuable future extensions also become possible: modelling not only the objective pose and dynamic motions of a human, but also the intent associated with each motion. This has become a key research area for the perception of motion within video multimedia and for improved Human Computer Interfaces (HCI), as well as for animating more realistic behaviours in synthesised avatars.
Link to publisher version (URL)
The complete book is available from IntechOpen Science Publishers here.