Limb-based feature description of human motion
This paper proposes a novel limb-based technique for the semantic description of motion capture data. The goal is a motion segmentation and classification technique that is easily extensible because it recognizes the actions of individual limbs rather than the whole body. This yields highly detailed metadata that can be extended as needed with additional motion classes, either by adding a new limb submotion or by defining a new full-body motion class that combines existing known limb movements. Results from an initial implementation that annotates the leg movements (forward and backward) of walking and running show that such a system is feasible, with annotation accuracy above 98%.
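The extensibility idea described above, defining full-body motion classes as combinations of per-limb submotions, can be sketched as a small lookup scheme. This is only an illustrative sketch under assumed names (the class table, limb names, and submotion labels are hypothetical), not the paper's actual implementation:

```python
# Hypothetical sketch: a full-body motion class is declared as a set of
# per-limb submotion labels, so new classes or new limb submotions can be
# added without changing the existing limb-level classifiers.
# All names below are illustrative assumptions, not from the paper.

FULL_BODY_CLASSES = {
    # each full-body class lists the limb submotions it combines
    "walking": {"left_leg": "leg_forward", "right_leg": "leg_backward"},
    "running": {"left_leg": "leg_forward_fast", "right_leg": "leg_backward_fast"},
}

def classify_full_body(limb_labels, classes=FULL_BODY_CLASSES):
    """Match per-limb submotion labels against full-body class definitions."""
    for name, pattern in classes.items():
        # a class matches when every limb it constrains carries that submotion
        if all(limb_labels.get(limb) == sub for limb, sub in pattern.items()):
            return name
    return "unknown"

# Extending the vocabulary is a declarative addition, not a retraining step:
FULL_BODY_CLASSES["kicking"] = {"right_leg": "leg_forward_fast"}

print(classify_full_body({"left_leg": "leg_forward", "right_leg": "leg_backward"}))
```

In this sketch, adding a new full-body class touches only the class table; the per-limb recognizers remain unchanged, which is the extensibility property the abstract claims.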