Time-efficient search and retrieval of visually similar content has become a pressing need and, at the same time, a significant research challenge, further intensified by the millions of images and videos that are generated on a daily basis. In this context, deep hashing techniques, which estimate a very low-dimensional binary vector for characterizing each image, have been introduced for realizing practically fast visual search tasks. In this paper, a novel approach to deep hashing is proposed, which explicitly takes into account information about the object types that are present in the image. To this end, a novel layer is introduced on top of current Neural Network (NN) architectures, which generates a reliability mask based on image semantic segmentation information. A thorough experimental evaluation on four datasets shows that incorporating local-level information during the hash-code learning phase significantly improves similarity retrieval results, compared to state-of-the-art approaches.
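As a rough illustration of the general idea (a minimal NumPy sketch, not the paper's actual architecture), the snippet below builds a binary reliability mask from hypothetical semantic-segmentation labels, uses it to weight convolutional features during pooling, and binarizes a random projection of the pooled vector into a short hash code. All names, shapes, and the choice of sign binarization are illustrative assumptions.

```python
import numpy as np

def reliability_mask(seg_labels, object_classes):
    # Hypothetical mask: 1.0 where a pixel belongs to an object class
    # of interest, 0.0 elsewhere (illustrative, not the paper's layer).
    return np.isin(seg_labels, object_classes).astype(np.float32)

def hash_code(feature_map, mask, proj):
    # feature_map: (H, W, C) conv features; mask: (H, W) reliability weights.
    w = mask / (mask.sum() + 1e-8)                           # normalize weights
    pooled = (feature_map * w[..., None]).sum(axis=(0, 1))   # (C,) masked pooling
    return (pooled @ proj > 0).astype(np.uint8)              # (K,) binary bits

rng = np.random.default_rng(0)
H, W, C, K = 8, 8, 16, 12
feat = rng.standard_normal((H, W, C)).astype(np.float32)
seg = rng.integers(0, 4, size=(H, W))          # hypothetical class-label map
mask = reliability_mask(seg, object_classes=[1, 2])
proj = rng.standard_normal((C, K)).astype(np.float32)
code = hash_code(feat, mask, proj)             # K-bit binary hash code
```

In a learned setting, the random projection would be replaced by a trained hashing head, and the mask would modulate which spatial locations contribute to the code, which is the intuition behind injecting local-level object information.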