LDS-Inspired Residual Networks

Authors
A. Dimou
D. Ataloglou
K. Dimitropoulos
F. Alvarez
P. Daras
Year
2019
Venue
IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 8, pp. 2363-2375, 2019.
Abstract

Residual Networks (ResNets) marked a milestone for the Deep Learning community due to their outstanding performance in diverse applications. They enable efficient training of increasingly deep networks, reducing both training difficulty and error. The main intuition behind them is that, instead of mapping the input information directly, they map a residual part of it. Since the original work, many extensions have been proposed to improve this information mapping. In this paper, a novel extension of the residual block, inspired by Linear Dynamical Systems and called LDS-ResNet, is proposed. Specifically, a new module is presented that improves the mapping of residual information by transforming it into a hidden state and then mapping it back to the desired feature space using convolutional layers. The proposed module is used to construct multi-branch residual blocks for Convolutional Neural Networks (CNNs). An exploration of possible architectural choices is presented and evaluated. Experimental results show that LDS-ResNet outperforms the original ResNet in image classification and object detection tasks on public datasets such as CIFAR-10/100, ImageNet, VOC, and MOT2017. Moreover, its performance boost is complementary to other extensions of the original network, such as pre-activation and bottleneck blocks, as well as stochastic training and Squeeze-and-Excitation.
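
For intuition only, below is a minimal PyTorch sketch of the idea described in the abstract: a residual branch that first projects features into a hidden state and then maps that state back to the input feature space before the identity shortcut is added. The class name `LDSResidualBlock`, the `hidden_channels` hyper-parameter, and the specific convolution/normalization layout are illustrative assumptions, not the exact architecture reported in the paper.

```python
import torch
import torch.nn as nn


class LDSResidualBlock(nn.Module):
    """Sketch of an LDS-inspired residual block (illustrative, not the paper's exact design).

    The residual branch loosely mirrors the transition/observation structure
    of a linear dynamical system: the input is first transformed into a
    hidden state, which is then mapped back to the original feature space.
    """

    def __init__(self, channels: int, hidden_channels: int):
        super().__init__()
        # "Transition": map the input features into a hidden state.
        self.to_hidden = nn.Sequential(
            nn.Conv2d(channels, hidden_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(hidden_channels),
            nn.ReLU(inplace=True),
        )
        # "Observation": map the hidden state back to the feature space.
        self.from_hidden = nn.Sequential(
            nn.Conv2d(hidden_channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hidden = self.to_hidden(x)            # residual information -> hidden state
        residual = self.from_hidden(hidden)   # hidden state -> feature space
        return self.relu(x + residual)        # identity shortcut, as in the original ResNet


# Usage: the block preserves the spatial shape and channel count of its input.
block = LDSResidualBlock(channels=64, hidden_channels=32)
y = block(torch.randn(1, 64, 32, 32))  # y has shape (1, 64, 32, 32)
```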