Adversarial Training (AT) has so far been considered the most effective approach for building robust computer vision architectures that can adequately handle adversarial attacks. However, its high computational cost, arising from the need to repeatedly generate attacks on image batches across training epochs, has been a limiting factor to its widespread use. As a result, to date, AT has only been applied to vanilla datasets, casting doubt on how soon it can be deployed in real-world computer vision systems in safety-critical domains. A few researchers have partially addressed the issue by proposing AT variations that moderately reduce this cost. Extending these advances, this work explores simple yet effective ways to further reduce the training time of AT. Specifically, it examines whether a) training consecutive adversarial epochs and b) attacking the entire dataset during adversarial epochs are necessary for learning a robust model.
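The two ideas above can be illustrated with a minimal, hypothetical scheduling sketch: interleaving adversarial and clean epochs, and attacking only a fraction of the data during adversarial epochs. The function names (`fgsm_attack`, `adversarial_schedule`) and all parameter values are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fgsm_attack(x, grad, eps=0.03):
    # Single-step FGSM perturbation: move each pixel by eps in the
    # direction of the sign of the loss gradient, then clip to [0, 1].
    # (Illustrative attack choice; the paper's attack may differ.)
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def adversarial_schedule(num_epochs, adv_every=2, attack_fraction=0.5):
    """Yield (epoch, is_adversarial, fraction_attacked) tuples.

    Only every `adv_every`-th epoch is adversarial (rather than all
    consecutive epochs), and only `attack_fraction` of each batch is
    attacked in those epochs (rather than the entire dataset).
    Both knobs are hypothetical defaults for illustration."""
    for epoch in range(num_epochs):
        is_adv = (epoch % adv_every == adv_every - 1)
        yield epoch, is_adv, (attack_fraction if is_adv else 0.0)

# Example: a 6-epoch run where epochs 1, 3, 5 are adversarial and
# half of each batch is attacked in those epochs.
schedule = list(adversarial_schedule(6))
```

Relative to standard AT, which attacks every batch in every epoch, such a schedule would cut the number of attack generations roughly by `1 / (adv_every / attack_fraction)`; whether robustness survives this reduction is the question the work investigates.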