A trajectory-based method for dynamic scene recognition

Publication Name

International Journal of Pattern Recognition and Artificial Intelligence

Abstract

Existing methods for dynamic scene recognition mostly use global features extracted from an entire video frame or video segment. In this paper, a trajectory-based dynamic scene recognition method is proposed. A trajectory is formed by a pixel moving across consecutive frames of a video segment, and the local regions surrounding the trajectory provide useful appearance and motion information about a portion of the video segment. The proposed method operates in several stages. First, dense and evenly distributed trajectories are extracted from a video segment. Then, fully-connected-layer features are extracted along each trajectory using a pre-trained Convolutional Neural Network (CNN) model, forming a feature sequence. Next, these feature sequences are fed into a Long Short-Term Memory (LSTM) network to learn their temporal behavior. Finally, by aggregating the information of the trajectories, a global representation of the video segment is obtained for classification. The LSTM is trained on synthetic trajectory feature sequences rather than real ones; the synthetic sequences are generated with a series of generative adversarial networks (GANs). In addition to classification, category-specific discriminative trajectories are located in a video segment, which helps reveal which portions of a video segment are more important than others. This is achieved by formulating an optimization problem that learns discriminative part detectors for all categories simultaneously. Experimental results on two benchmark dynamic scene datasets show that the proposed method is highly competitive with six other methods.
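
For a concrete picture of the classification pipeline summarized above, the sketch below shows one plausible arrangement of the per-trajectory LSTM and the aggregation step in PyTorch. It is a minimal illustration, not the authors' implementation: the feature dimension (4096, typical of a CNN fully-connected layer), the hidden size, the number of classes, and the mean-pooling aggregation are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class TrajectoryLSTMClassifier(nn.Module):
    """Hypothetical sketch of the pipeline in the abstract: each trajectory
    yields a sequence of CNN fc-layer features; an LSTM models their temporal
    behavior; per-trajectory outputs are aggregated into a global
    representation of the video segment, which is then classified."""

    def __init__(self, feat_dim=4096, hidden_dim=256, num_classes=13):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, traj_feats):
        # traj_feats: (num_trajectories, traj_len, feat_dim) for one segment.
        _, (h_n, _) = self.lstm(traj_feats)   # final hidden state per trajectory
        traj_repr = h_n.squeeze(0)            # (num_trajectories, hidden_dim)
        global_repr = traj_repr.mean(dim=0)   # aggregate (assumed: mean pooling)
        return self.classifier(global_repr)   # class scores for the segment

# Example: 100 trajectories, each tracked over 15 frames, 4096-d features.
model = TrajectoryLSTMClassifier()
scores = model(torch.randn(100, 15, 4096))
print(scores.shape)  # torch.Size([13])
```

Mean pooling is used here only as the simplest way to aggregate trajectory-level outputs into a segment-level representation; the paper's actual aggregation scheme, and its GAN-based generation of synthetic training sequences, are not reproduced in this sketch.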

Open Access Status

This publication is not available as open access

Article Number

2150029


Link to publisher version (DOI)

http://dx.doi.org/10.1142/S0218001421500294