Action Recognition via Motion Analysis
In this method, real-time object tracking is based on background subtraction. The algorithm first captures an image of the background, assuming the frame contains no subjects. For each newly acquired frame, the background is subtracted from it and the result is thresholded to form a binary foreground mask. The mask is then enhanced by morphological dilation, and connected-component analysis is applied to obtain the final foreground label map, which serves as the basis for object tracking. Action recognition is performed by tracking the trajectory of the centroid of the foreground bounding box, classifying the foreground object's action as either walking or standing.
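The pipeline above can be sketched in plain NumPy. This is an illustrative simplification, not the published implementation: the connected-component step is omitted (the centroid of the whole mask stands in for the bounding-box centroid of the dominant component), and the walking/standing decision is reduced to a simple mean-displacement threshold. All function names and the `thresh`/`speed_thresh` parameters are assumptions.

```python
import numpy as np

def foreground_mask(background, frame, thresh=30):
    """Threshold the absolute background difference into a binary mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def dilate(mask, iterations=1):
    """3x3 binary dilation implemented with array shifts (no OpenCV needed)."""
    out = mask.copy()
    rows, cols = out.shape
    for _ in range(iterations):
        padded = np.pad(out, 1)
        acc = np.zeros_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc |= padded[1 + dy:1 + dy + rows, 1 + dx:1 + dx + cols]
        out = acc
    return out

def centroid(mask):
    """Centroid (row, col) of the foreground pixels, or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.mean(), xs.mean()

def classify(centroids, speed_thresh=2.0):
    """'walking' if the mean per-frame centroid displacement exceeds a threshold
    (a crude stand-in for the trajectory-based classification described above)."""
    pts = np.array(centroids)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return "walking" if steps.mean() > speed_thresh else "standing"
```

For example, feeding a sequence of frames in which a bright blob shifts several pixels per frame yields a drifting centroid and the label "walking", while a stationary blob yields "standing".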
Action Recognition via Motion Tracking
This method constitutes a novel, low-complexity approach to action recognition from videos. Initially, a 3D Sobel filter is applied to the video volume, resulting in a binary image with non-zero pixels in areas of motion. The non-zero pixels are spatially clustered using k-means, and the most dominant centers of video motion are extracted. These centers are then tracked to form sparse trajectories, whose properties are used to create a new feature type, the Histogram of Oriented Trajectories (HOT), describing the video. The feature vectors are finally passed to an AdaBoost classifier. The proposed method reports competitive results on the KTH and MuHAVi datasets while remaining low in complexity, making it suitable for surveillance systems with limited processing power. The provided video shows the trajectory extraction and the corresponding classification result. This work (“Action Recognition From Videos using Sparse Trajectories”) was published in the 7th International Conference on Imaging for Crime Detection and Prevention (ICDP-16), Madrid, 23-25 November, 2016.
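The feature-extraction steps can be sketched as follows. This is a sketch under stated assumptions, not the paper's implementation: a thresholded temporal difference stands in for the 3D Sobel filter, the k-means routine is a plain NumPy version, and the HOT computation shown (binning the orientation of each displacement along every sparse trajectory) is one plausible reading of the descriptor; the paper's exact definition may differ. All names and parameters here are illustrative.

```python
import numpy as np

def motion_pixels(prev, curr, thresh=30):
    """Stand-in for the 3D Sobel step: thresholded temporal difference,
    returned as (row, col) coordinates of moving pixels."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return np.argwhere(diff > thresh).astype(float)

def kmeans(points, k, iters=10, seed=0):
    """Plain NumPy k-means over pixel coordinates; returns the k motion centers."""
    points = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def hot_descriptor(trajectories, bins=8):
    """Histogram of Oriented Trajectories (illustrative): bin the orientation
    of each per-frame displacement along every sparse trajectory, then
    L1-normalize the histogram into a feature vector."""
    hist = np.zeros(bins)
    for traj in trajectories:
        steps = np.diff(np.asarray(traj, float), axis=0)
        for dy, dx in steps:
            angle = np.arctan2(dy, dx) % (2 * np.pi)
            hist[int(angle / (2 * np.pi) * bins) % bins] += 1
    total = hist.sum()
    return hist / total if total else hist
```

In use, the motion centers from consecutive frames would be linked by nearest-neighbor matching into trajectories, and the resulting HOT vectors fed to a classifier (AdaBoost in the paper).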