GAze on TArget Dataset
The GAze on TArget (GATA) dataset is a large-scale annotated gaze dataset tailored for training deep learning architectures. It was created following the “target search” paradigm, in which subjects were asked to visually search for a specific object class. Forty-eight different subjects participated in the recording procedure, using the myGaze capturing sensor.
The dataset provides 120,900 gaze search annotations for COCO images.
The gaze annotations are provided in JSON format, using the following naming convention.
The first part (objectID) denotes the target object class id, and the second part (imageID) the COCO image id, respectively. Inside each JSON file, gaze point annotations, i.e. a timestamp and x, y coordinates, are included as presented below.
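A minimal loader for such an annotation file can be sketched as follows. The exact file layout is an assumption based on the description above: a filename of the form `objectID_imageID.json` and a list of gaze points, each carrying a timestamp and x, y coordinates; the key names used here are hypothetical.

```python
import json
from pathlib import Path

def load_gaze_annotation(path):
    """Parse a single GATA-style annotation file.

    Assumed (not confirmed) layout: the file name encodes
    objectID_imageID, and the body is a JSON list of gaze points,
    each with "timestamp", "x" and "y" fields.
    """
    path = Path(path)
    # Naming convention described above: objectID_imageID.json
    object_id, image_id = path.stem.split("_", 1)
    with path.open() as f:
        points = json.load(f)
    return {
        "object_id": object_id,
        "image_id": image_id,
        # Keep points as (timestamp, x, y) tuples, time-ordered
        "points": [(p["timestamp"], p["x"], p["y"]) for p in points],
    }
```

With 120,900 such files, the loader would typically be mapped over a directory listing to build a training set.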
Relevance Object Assessment based on Gaze
The proposed dataset was utilized to build a deep learning model capable of predicting whether objects in an image are relevant or non-relevant, based on gaze, according to the users’ preferences. The proposed model is presented below:
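The core idea, mapping gaze sequences to per-object relevance, can be illustrated with a simple dwell-time heuristic. This is only a sketch standing in for the learned spatio-temporal model of the paper; the bounding boxes, threshold value, and function names are assumptions for illustration.

```python
def object_dwell_times(points, boxes):
    """Accumulate gaze dwell time (seconds) per object.

    points: list of (timestamp, x, y) gaze samples, time-ordered.
    boxes:  dict mapping object name -> (x_min, y_min, x_max, y_max).
    Each inter-sample interval is credited to the object whose
    bounding box contains the sample.
    """
    dwell = {name: 0.0 for name in boxes}
    for (t0, x, y), (t1, _, _) in zip(points, points[1:]):
        for name, (x0, y0, x1, y1) in boxes.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += t1 - t0
                break
    return dwell

def relevant_objects(points, boxes, threshold=0.2):
    """Mark an object relevant when fixated for at least `threshold` seconds.

    The threshold is a hypothetical stand-in for the model's
    learned decision boundary.
    """
    dwell = object_dwell_times(points, boxes)
    return {name: t >= threshold for name, t in dwell.items()}
```

In the actual approach, a deep network replaces this hand-crafted rule by learning the spatio-temporal gaze patterns that separate relevant from non-relevant objects.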
You can download the dataset here.
Stavridis, K., Psaltis, A., Dimou, A., Papadopoulos, G. Th., & Daras, P. (2019). Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment. In 2019 27th European Signal Processing Conference (EUSIPCO). IEEE.