The environmental challenges the world faces have never been greater or more complex. Forested areas and urban woodlands worldwide are threatened by large-scale forest fires, which have increased dramatically in both frequency and magnitude over recent decades in Europe and beyond. To address this, rapid advances in remote sensing systems, including ground-based, unmanned-aerial-vehicle-based, and satellite-based platforms, have been adopted for effective forest fire surveillance. In this paper, recently introduced 360-degree sensor cameras are proposed for early fire detection: their unlimited field of view reduces the number of required sensors and the associated computational cost, making the overall system more efficient. More specifically, once raw 360-degree optical data are obtained from an RGB 360-degree camera mounted on an unmanned aerial vehicle, the equirectangular-projection images are converted to stereographic images. Two DeepLab V3+ networks then perform flame and smoke segmentation, respectively. Subsequently, a novel adaptive post-validation method is proposed that exploits the environmental appearance of each test image to reduce the false-positive rate. To evaluate the proposed system, a dataset, the "Fire detection 360-degree dataset", consisting of 150 unlimited-field-of-view images containing both synthetic and real fire, was created. Experimental results demonstrate the potential of the proposed system, which achieved an F-score of 94.6% for fire detection. This indicates that the proposed method could contribute significantly to early fire detection.
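The conversion from an equirectangular panorama to a stereographic ("little planet") view mentioned above can be sketched with the standard inverse stereographic projection. The function below is a minimal NumPy illustration, not the paper's implementation: the function name, output size, and `fov_scale` parameter are assumptions, nearest-neighbour sampling is used for brevity, and the projection is centred on the nadir (the ground directly below the drone).

```python
import numpy as np

def equirect_to_stereographic(equi, out_size=512, fov_scale=1.0):
    """Map an equirectangular panorama (H x W x 3) to a stereographic
    'little planet' view centred on the nadir.

    Illustrative sketch only: fov_scale controls how much of the sphere
    is compressed into the output plane (hypothetical parameter).
    """
    h, w = equi.shape[:2]
    # Output-plane coordinates in [-fov_scale, fov_scale]
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    x = (2.0 * xs / (out_size - 1) - 1.0) * fov_scale
    y = (2.0 * ys / (out_size - 1) - 1.0) * fov_scale
    rho = np.hypot(x, y)
    # Inverse stereographic projection from the south pole (nadir):
    # angular distance c from the pole, then latitude and longitude
    c = 2.0 * np.arctan(rho)
    lat = c - np.pi / 2.0          # in [-pi/2, pi/2)
    lon = np.arctan2(x, y)         # in [-pi, pi]
    # Convert (lon, lat) back to equirectangular pixel indices
    # (row 0 = north pole, image convention)
    u = ((lon + np.pi) / (2.0 * np.pi) * (w - 1)).astype(int)
    v = ((np.pi / 2.0 - lat) / np.pi * (h - 1)).astype(int)
    u = np.clip(u, 0, w - 1)
    v = np.clip(v, 0, h - 1)
    return equi[v, u]
```

Nearest-neighbour lookup keeps the sketch short; a production pipeline would normally use bilinear interpolation when sampling the panorama.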
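The abstract does not specify how the adaptive post-validation exploits the environmental appearance of each test image, so the following is only a generic stand-in for that idea, not the paper's method: candidate flame pixels from the segmentation network are kept only if their colour statistics stand out from the per-image background, with thresholds adapted to each scene. The function name, margins, and HSV criterion are all hypothetical.

```python
import numpy as np

def post_validate(hsv, flame_mask, sat_margin=0.15, val_margin=0.15):
    """Illustrative scene-adaptive false-positive filter (NOT the
    paper's exact post-validation method).

    hsv        : H x W x 3 float array, channels in [0, 1]
    flame_mask : H x W boolean array from the segmentation network

    A candidate region is retained only if it is markedly more
    saturated and brighter than the background of the same image,
    so the thresholds adapt to each scene's appearance.
    """
    sat, val = hsv[..., 1], hsv[..., 2]
    bg = ~flame_mask
    if not bg.any() or not flame_mask.any():
        return flame_mask
    # Per-image background statistics set the adaptive thresholds
    sat_thr = sat[bg].mean() + sat_margin
    val_thr = val[bg].mean() + val_margin
    return flame_mask & (sat > sat_thr) & (val > val_thr)
```

Any such filter runs after segmentation and can only remove candidate pixels, so it trades a small recall cost for a lower false-positive rate.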