Most of the established neural network architectures in computer vision are composed of essentially the same building blocks (e.g., convolutional, normalization, regularization, and pooling layers), the main difference between them being the connectivity of these components within the architecture rather than the components themselves. In this paper we propose a generalization of the traditional average pooling operator. Based on the requirements of efficiency (to provide information without repetition), equivalence (to be able to produce the same output as average pooling), and extendability (to provide a natural way of obtaining novel information), we arrive at a formulation that generalizes average pooling using Zernike moments. Experimental results on the CIFAR-10, CIFAR-100, and Rotated MNIST datasets showed that the proposed method outperformed the two baseline approaches, global average pooling and 2×2 average pooling, as well as the two variants of stochastic pooling and AlphaMEX, in every case. A worst-case performance analysis on CIFAR-100 showed that significant gains in classification accuracy can be realized with only a modest 10% increase in training time.
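The equivalence requirement stated above, that the operator can reproduce plain average pooling as a special case, can be illustrated with a minimal sketch. The NumPy code below is not the paper's implementation; the function names (`zernike_pool`, `zernike_radial`), the choice of sampling the unit disk at patch-cell centres, and the normalization are our own assumptions. It projects each non-overlapping pooling window onto the Zernike function V_n^m and checks that the order-(0, 0) moment coincides with 2×2 average pooling, while higher orders yield additional moment features.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^|m|(rho) of the Zernike basis."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + m) // 2 - k)
                * factorial((n - m) // 2 - k)))
        out += c * rho ** (n - 2 * k)
    return out

def zernike_pool(x, n=0, m=0, window=2):
    """Pool each (window x window) patch of a 2-D map by projecting it
    onto the Zernike function V_n^m, up to a normalization constant.

    For n = m = 0 the basis is constant, so the output coincides with
    plain average pooling (the equivalence requirement); higher orders
    extract additional moment features (the extendability requirement).
    """
    H, W = x.shape
    # Sample the unit disk at the centres of the patch cells.
    # (A full implementation would mask samples with rho > 1,
    # which can occur at the corners of larger windows.)
    coords = (np.arange(window) + 0.5) / window * 2.0 - 1.0
    yy, xx = np.meshgrid(coords, coords, indexing="ij")
    rho = np.sqrt(xx ** 2 + yy ** 2)
    theta = np.arctan2(yy, xx)
    basis = zernike_radial(n, m, rho) * np.exp(-1j * m * theta)
    # Gather non-overlapping patches: shape (H/w, W/w, w, w).
    patches = x.reshape(H // window, window, W // window, window)
    patches = patches.transpose(0, 2, 1, 3)
    out = (patches * np.conj(basis)).mean(axis=(-2, -1))
    return out.real if m == 0 else out

# Sanity check of the equivalence property on a toy 4x4 map.
x = np.arange(16, dtype=float).reshape(4, 4)
avg = x.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).mean(axis=(-2, -1))
assert np.allclose(zernike_pool(x, n=0, m=0, window=2), avg)
```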