GAze on TArget Dataset
The GAze on TArget (GATA) dataset is a large-scale annotated gaze dataset tailored for training deep learning architectures. It was created following the “target search” paradigm, in which subjects were asked to visually search for a specific object class. Forty-eight different subjects participated in the recording procedure, using the myGaze capturing sensor.
Figure 1: Gaze annotation process
- The introduced dataset contains about 120,000 gaze-annotated images.
- Based on the MSCOCO 2014 database.
- 80 object classes.
- 48 subjects, 41 males – 7 females, ages in the range 22-45.
- 238 experimental sessions – 20 minutes each.
- Each experimental session consists of about 85 image-group sessions (sets of six images).
- 20,000 image-group sessions in total.
Figure 2: Various classes heatmaps
Figure 3: Visualization of the implicit human response: fixation scan-path (left) and the corresponding heatmap (right)
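A fixation heatmap such as the one in Figure 3 is typically obtained by accumulating a Gaussian kernel at each fixation point. The sketch below is a minimal, assumption-laden illustration of that idea (the kernel width, the duration weighting, and the `fixation_heatmap` function itself are illustrative choices, not part of the dataset's tooling):

```python
import math

def fixation_heatmap(fixations, width, height, sigma=20.0):
    """Accumulate an isotropic Gaussian at each fixation to form a heatmap.

    fixations: list of (x, y, duration) tuples; the duration weights the kernel.
    Returns a height x width grid (list of lists) of accumulated values.
    """
    heat = [[0.0] * width for _ in range(height)]
    for fx, fy, dur in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                heat[y][x] += dur * math.exp(-d2 / (2 * sigma ** 2))
    return heat

# Two fixations: a long one at (8, 8) and a shorter one at (24, 8).
hm = fixation_heatmap([(8, 8, 1.0), (24, 8, 0.5)], width=32, height=16, sigma=4.0)
```

The longer fixation produces the hotter peak, which is exactly the visual effect shown in the right panel of Figure 3.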
The gaze annotations are provided as JSON files, named according to the following convention: the first part (objectID) denotes the target object class id and the second part (imageID) the COCO image id. Each JSON file contains the gaze-point annotations, i.e. a timestamp and x, y coordinates for every gaze sample.
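As a rough illustration, a loader for such a file might look like the sketch below. Note that the underscore separator in the file name and the field names (`gaze_points`, `timestamp`, `x`, `y`) are assumptions based on the description above, not the dataset's documented schema:

```python
import json

# Hypothetical contents of an annotation file named "<objectID>_<imageID>.json";
# the field names are assumed from the description, not taken from the dataset.
sample = """
{
  "gaze_points": [
    {"timestamp": 0.0,  "x": 512.3, "y": 240.1},
    {"timestamp": 16.7, "x": 530.8, "y": 251.4}
  ]
}
"""

def parse_annotation(filename, text):
    """Split the file name into (objectID, imageID) and load the gaze points."""
    object_id, image_id = filename.rsplit(".", 1)[0].split("_")
    points = json.loads(text)["gaze_points"]
    return int(object_id), int(image_id), points

obj_id, img_id, pts = parse_annotation("17_393226.json", sample)
# obj_id is the target object class id, img_id the COCO image id
```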
Relevance Object Assessment based on Gaze
The proposed dataset was utilized to build a deep learning model that predicts, based on gaze, whether the objects in an image are relevant or non-relevant according to the user's preferences. The proposed model is presented below:
Figure 4: The proposed DL architecture for user relevance assessment, where DEi is the distance embedding for ith fixation and Oj is the jth object class.
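The distance embeddings DEi in Figure 4 relate each fixation to the objects present in the image. A simple stand-in for such a distance feature, computed against object bounding boxes, could look like the following sketch (the function, its inputs, and the use of box centres are illustrative assumptions; the paper's actual feature construction may differ):

```python
import math

def fixation_object_distances(fixations, objects):
    """For each fixation, compute its Euclidean distance to each object's
    bounding-box centre -- an illustrative stand-in for per-fixation
    distance features, NOT the paper's exact embedding input.

    fixations: list of (x, y) gaze coordinates.
    objects: dict mapping object id to a (x, y, w, h) bounding box.
    Returns one {object_id: distance} dict per fixation.
    """
    rows = []
    for fx, fy in fixations:
        row = {}
        for oid, (bx, by, bw, bh) in objects.items():
            cx, cy = bx + bw / 2, by + bh / 2
            row[oid] = math.hypot(fx - cx, fy - cy)
        rows.append(row)
    return rows

# A single fixation landing exactly on the centre of object 1's box.
dists = fixation_object_distances([(100, 100)], {1: (90, 90, 20, 20)})
```

A sequence of such per-fixation feature vectors is what a spatio-temporal model like the one above would then consume.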
You can download the dataset here.
Stavridis, K., Psaltis, A., Dimou, A., Papadopoulos, G. Th., & Daras, P. (2019). Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment. In 2019 27th European Signal Processing Conference (EUSIPCO). IEEE.