Contextual guidance of eye movements and attention in real-world scenes: The role of global features on object search
Antonio Torralba, Aude Oliva, Monica Castelhano, John Henderson
Psychological Review, Vol. 113, No. 4. (October 2006), pp. 766-786.
Images.zip (Browse the images on LabelMe)
Fixation data.zip (Run showData.m)
The images used to train the context model are part of the LabelMe dataset:
1) people search (see all images; note this will load thousands of thumbnails)
2) painting search (see all images)
3) mug search (see all images)
These datasets are available via the LabelMe website.
Summary of results
The full model presented here incorporates scene priors to modulate the salient regions, taking into account the expected location of the target given its scene context. In the people search task, the two factors are combined, yielding a saliency map modulated by the task. To evaluate the performance of the models, we compared the locations fixated by 8 participants against a thresholded version of each model's map.
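The combination of bottom-up saliency with a scene prior, and the thresholding used for evaluation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the multiplicative combination, the `gamma` weighting exponent, the 20% threshold, and the toy prior favoring the lower part of the image are all assumptions made for the example.

```python
import numpy as np

def contextual_guidance(saliency, context_prior, gamma=1.0):
    """Combine a bottom-up saliency map with a task-driven scene prior.

    Sketch of the idea in the text: the scene prior modulates the
    salient regions toward the expected target location. `gamma` is a
    hypothetical exponent controlling the strength of the modulation.
    """
    combined = saliency * context_prior ** gamma
    return combined / combined.sum()  # normalize to a probability map

def thresholded_region(score_map, fraction=0.2):
    """Binary mask selecting the top `fraction` of the map's area;
    fixated locations can then be compared against this mask."""
    cutoff = np.quantile(score_map, 1.0 - fraction)
    return score_map >= cutoff

# Toy example: random saliency, and a prior that (for illustration only)
# assumes the target is more likely in the lower part of the image.
rng = np.random.default_rng(0)
saliency = rng.random((48, 64))
rows = np.linspace(0.0, 1.0, 48)[:, None]
context_prior = np.tile(rows, (1, 64))
score = contextual_guidance(saliency, context_prior)
mask = thresholded_region(score, fraction=0.2)
```

A fixation would then count as falling on the model's predicted region if `mask` is true at the fixated pixel.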