Authors: A. Doumanoglou, N. Vretos, P. Daras
Year: 2019
Venue: Neurocomputing, 368, 34-50, 2019.
Slow Feature Analysis (SFA) is an unsupervised learning algorithm that extracts slowly varying features from a temporal vectorial signal. In SFA, the slowness of a feature is measured by the average value of its squared time-derivative. In this paper, we introduce Frequency-Based Slow Feature Analysis (FSFA) and prove that it is a generalization of SFA in the frequency domain. In FSFA, the low-pass filtered versions of the extracted slow features have maximum energy, making slowness a filter-dependent measurement. Experimental results show that the extracted features depend on the selected filter kernel and differ from the features extracted by SFA. However, it is proven that there is one filter kernel for which FSFA becomes equivalent to SFA. Furthermore, experiments on the UCF-101 video action recognition dataset show that the features extracted by FSFA with suitable filter kernels yield improved classification performance compared to the features extracted by standard SFA. Finally, an experiment on UCF-101 with an indicative, simple and shallow neural network composed of FSFA and SFA nodes demonstrates that this network can transform the features extracted by a known Convolutional Neural Network into a new feature space in which classification performance with a Support Vector Machine is improved.
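To make the two objectives in the abstract concrete, the Python sketch below contrasts standard linear SFA (minimizing the average squared time-derivative of whitened outputs) with an FSFA-style criterion (maximizing the energy of the low-pass filtered outputs, as the abstract describes). This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the finite-difference approximation of the time-derivative, and the reduction of the FSFA objective to an eigenproblem on the filtered-signal covariance are assumptions made here for illustration.

```python
import numpy as np

def linear_sfa(X, n_features):
    """Standard linear SFA: minimize the average squared time-derivative
    of the output features, subject to the outputs being zero-mean,
    unit-variance and decorrelated.  X has shape (T, D)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    d, E = np.linalg.eigh(cov)
    W_white = E / np.sqrt(d)              # whitening matrix (D x D)
    Z = Xc @ W_white                      # whitened signal, identity covariance
    dZ = np.diff(Z, axis=0)               # finite-difference time-derivative
    slow_cov = dZ.T @ dZ / (len(dZ) - 1)
    lam, V = np.linalg.eigh(slow_cov)
    # Slowest features correspond to the smallest derivative-energy eigenvalues.
    return W_white @ V[:, :n_features]

def fsfa_like(X, kernel, n_features):
    """Hypothetical FSFA-style variant: maximize the energy of the
    low-pass filtered outputs of the whitened signal for a chosen
    filter kernel (illustrative assumption based on the abstract)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    d, E = np.linalg.eigh(cov)
    W_white = E / np.sqrt(d)
    Z = Xc @ W_white
    # Filter each whitened component with the chosen low-pass kernel.
    F = np.apply_along_axis(lambda z: np.convolve(z, kernel, mode="valid"), 0, Z)
    filt_cov = F.T @ F / (len(F) - 1)
    lam, V = np.linalg.eigh(filt_cov)
    # Maximum filtered energy corresponds to the largest eigenvalues.
    return W_white @ V[:, -n_features:]

# Example usage on a synthetic signal with a short moving-average kernel.
rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal((1000, 8)), axis=0)
W_sfa = linear_sfa(X, n_features=2)
W_fsfa = fsfa_like(X, kernel=np.ones(5) / 5, n_features=2)
```

In this sketch, the choice of `kernel` plays the role of the filter kernel discussed in the abstract: different kernels emphasize different frequency bands and therefore yield different projections, while the paper proves that one particular kernel recovers the standard SFA solution.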