Neural Network Compression Using Higher-Order Statistics and Auxiliary Reconstruction Losses

Authors
C. Chatzikonstantinou
G. T. Papadopoulos
K. Dimitropoulos
P. Daras
Year
2020
Venue
In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, Washington, United States, June 2020.

Abstract

In this paper, the problem of pruning and compressing the weights of various layers of deep neural networks is investigated. The proposed method removes redundant filters from the network in order to reduce computational complexity and storage requirements, while improving the performance of the original network. More specifically, a novel filter selection criterion is introduced, based on the observation that filters whose weights follow a Gaussian distribution correspond to hidden units that do not capture important aspects of the data. To this end, Higher Order Statistics (HOS) are employed, and filters with low cumulant values, i.e., filters whose weight distributions do not deviate significantly from the Gaussian, are identified and removed from the network. In addition, a novel pruning strategy is proposed that determines the pruning ratio of each individual layer using the Shapiro-Wilk normality test. The use of auxiliary MSE losses (at intermediate layers and after the softmax layer) during the fine-tuning phase further improves the overall performance of the compressed network. Extensive experiments with different network architectures, together with comparisons against state-of-the-art approaches on well-known public datasets such as CIFAR-10, CIFAR-100 and ILSVRC-12, demonstrate the great potential of the proposed approach.
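
To make the HOS criterion concrete, the sketch below ranks the filters of a convolutional layer by the magnitude of their fourth-order cumulant, estimated here as the excess kurtosis of each filter's flattened weights (which vanishes for an exactly Gaussian sample). This is a minimal illustration under that assumption; the function name and the exact cumulant estimator are illustrative and not taken verbatim from the paper.

    import numpy as np
    from scipy.stats import kurtosis

    def rank_filters_by_cumulant(weight):
        # weight: array of shape (out_channels, in_channels, kH, kW).
        # Flatten each filter into one row and estimate its 4th-order
        # cumulant via the excess kurtosis (zero for a Gaussian).
        flat = weight.reshape(weight.shape[0], -1)
        kappa4 = kurtosis(flat, axis=1, fisher=True)
        # Smaller |kappa4| means closer to Gaussian, hence a better
        # pruning candidate; return indices sorted accordingly.
        return np.argsort(np.abs(kappa4))

    # Hypothetical usage: drop the 4 most Gaussian-like of 64 filters.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 16, 3, 3))
    prune_idx = rank_filters_by_cumulant(w)[:4]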
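
The per-layer pruning ratio can similarly be sketched with SciPy's Shapiro-Wilk test. The mapping used below, namely the fraction of a layer's filters for which the test fails to reject normality at level alpha, is an assumption made for illustration; the paper's exact rule may differ.

    import numpy as np
    from scipy.stats import shapiro

    def layer_pruning_ratio(weight, alpha=0.05, max_n=5000, seed=0):
        # Fraction of filters whose weights look Gaussian under the
        # Shapiro-Wilk test (p > alpha), used here as the layer's
        # pruning ratio (an illustrative mapping, not the paper's).
        rng = np.random.default_rng(seed)
        flat = weight.reshape(weight.shape[0], -1)
        gaussian_like = 0
        for f in flat:
            # The test is intended for moderate sample sizes, so
            # subsample very large filters before testing.
            sample = f if f.size <= max_n else rng.choice(f, max_n, replace=False)
            _, p = shapiro(sample)
            gaussian_like += p > alpha
        return gaussian_like / flat.shape[0]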
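
Finally, the auxiliary MSE losses used during fine-tuning can be written as a combined objective: the task loss of the pruned network, plus an MSE term between intermediate feature maps of the pruned and original networks, plus an MSE term between their post-softmax outputs. The weighting factors below are illustrative placeholders, and a single intermediate point is shown for brevity.

    import torch.nn.functional as F

    def finetune_loss(pruned_logits, orig_logits,
                      pruned_feats, orig_feats,
                      targets, lam_mid=0.5, lam_out=0.5):
        # Cross-entropy of the pruned network on the labels.
        ce = F.cross_entropy(pruned_logits, targets)
        # Auxiliary MSE between intermediate feature maps.
        mid = F.mse_loss(pruned_feats, orig_feats.detach())
        # Auxiliary MSE between post-softmax outputs.
        out = F.mse_loss(F.softmax(pruned_logits, dim=1),
                         F.softmax(orig_logits, dim=1).detach())
        return ce + lam_mid * mid + lam_out * out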