Abstract
Real-world scenes comprise a blooming, buzzing confusion of information. To manage this complexity, visual attention is guided to important scene regions in real time [1,2,3,4,5,6,7]. What factors guide attention within scenes? A leading theoretical position suggests that visual salience based on semantically uninterpreted image features plays the critical causal role in attentional guidance, with knowledge and meaning playing a secondary or modulatory role [8,9,10,11]. Here we propose instead that meaning plays the dominant role in guiding human attention through scenes. To test this proposal, we developed ‘meaning maps’ that represent the semantic richness of scene regions in a format that can be directly compared to image salience. We then contrasted the degree to which the spatial distributions of meaning and salience predict viewers’ overt attention within scenes. The results showed that both meaning and salience predicted the distribution of attention, but that when the relationship between meaning and salience was controlled, only meaning accounted for unique variance in attention. This pattern of results was apparent from the very earliest time-point in scene viewing. We conclude that meaning is the driving force guiding attention through real-world scenes.
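The key analysis contrasts two spatial predictors of attention, meaning and salience, both on their own and after their shared variance is controlled. A minimal sketch of that style of comparison is given below; it assumes hypothetical arrays (meaning_map, salience_map, attention_map) defined on a common spatial grid and uses ordinary squared and semipartial correlations, so it illustrates the general approach rather than reproducing the authors’ actual pipeline.

```python
import numpy as np
from scipy import stats

def squared_correlation(pred, target):
    """Squared linear correlation (R^2) between two flattened maps."""
    r, _ = stats.pearsonr(pred.ravel(), target.ravel())
    return r ** 2

def unique_r2(pred, control, target):
    """Variance in `target` explained by `pred` after the part of `pred`
    shared with `control` has been regressed out (semipartial R^2)."""
    pred, control, target = pred.ravel(), control.ravel(), target.ravel()
    fit = stats.linregress(control, pred)  # predict `pred` from `control`
    pred_residual = pred - (fit.slope * control + fit.intercept)
    r, _ = stats.pearsonr(pred_residual, target)
    return r ** 2

# Hypothetical inputs: a rated semantic-richness (meaning) map, an image-salience
# map, and a fixation-density (attention) map, all on the same grid.
rng = np.random.default_rng(0)
meaning_map = rng.random((192, 256))
salience_map = rng.random((192, 256))
attention_map = rng.random((192, 256))

print("meaning R^2:        ", squared_correlation(meaning_map, attention_map))
print("salience R^2:       ", squared_correlation(salience_map, attention_map))
print("unique meaning R^2: ", unique_r2(meaning_map, salience_map, attention_map))
print("unique salience R^2:", unique_r2(salience_map, meaning_map, attention_map))
```

In this framing, a predictor that explains attention only through features it shares with the other predictor will show a near-zero unique (semipartial) R^2, which is the logic behind the claim that salience carries no unique variance once meaning is controlled.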
References
Land, M. F. & Hayhoe, M. M. In what ways do eye movements contribute to everyday activities? Vision Res. 41, 3559–3565 (2001).
Hayhoe, M. M. & Ballard, D. Eye movements in natural behavior. Trends Cogn. Sci. 9, 188–194 (2005).
Henderson, J. M. Human gaze control during real-world scene perception. Trends Cogn. Sci. 7, 498–504 (2003).
Henderson, J. M. Gaze control as prediction. Trends Cogn. Sci. 21, 15–23 (2017).
Buswell, G. T. How People Look at Pictures (Univ. Chicago Press, Chicago, 1935).
Yarbus, A. L. Eye Movements and Vision (Plenum, 1967).
Henderson, J. M. & Hollingworth, A. High-level scene perception. Annu. Rev. Psychol. 50, 243–271 (1999).
Itti, L. & Koch, C. Computational modelling of visual attention. Nat. Rev. Neurosci. 2, 194–203 (2001).
Parkhurst, D., Law, K. & Niebur, E. Modelling the role of salience in the allocation of visual selective attention. Vision Res. 42, 107–123 (2002).
Borji, A., Sihite, D. N. & Itti, L. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Trans. Image Process. 22, 55–69 (2013).
Walther, D. & Koch, C. Modeling attention to salient proto-objects. Neural Networks 19, 1395–1407 (2006).
Henderson, J. M. Regarding scenes. Curr. Dir. Psychol. Sci. 16, 219–222 (2007).
Itti, L., Koch, C. & Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998).
Harel, J., Koch, C. & Perona, P. in Advances in Neural Information Processing Systems (NIPS 2006) Vol. 19, 1–8 (2006).
Potter, M. Meaning in visual search. Science 187, 965–966 (1975).
Biederman, I. Perceiving real-world scenes. Science 177, 77–80 (1972).
Wolfe, J. M. & Horowitz, T. S. Five factors that guide attention in visual search. Nat. Hum. Behav. 1, 0058 (2017).
Tatler, B. W., Hayhoe, M. M., Land, M. F. & Ballard, D. H. Eye guidance in natural vision: reinterpreting salience. J. Vis. 11, 5 (2011).
Carmi, R. & Itti, L. The role of memory in guiding attention during natural vision. J. Vis. 6, 898–914 (2006).
Torralba, A., Oliva, A., Castelhano, M. S. & Henderson, J. M. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychol. Rev. 113, 766–786 (2006).
Elazary, L. & Itti, L. Interesting objects are visually salient. J. Vis. 8, 3.1–15 (2008).
Henderson, J. M., Brockmole, J. R., Castelhano, M. S. & Mack, M. in Eye Movements: A Window on Mind and Brain (eds. Van Gompel, R. P. G. et al.) 537–562 (Elsevier, Oxford, 2007).
Henderson, J. M., Malcolm, G. L. & Schandl, C. Searching in the dark: cognitive relevance drives attention in real-world scenes. Psychon. Bull. Rev. 16, 850–856 (2009).
Nuthmann, A. & Henderson, J. M. Object-based attentional selection in scene viewing. J. Vis. 10, 20 (2010).
Bylinskii, Z., Judd, T., Oliva, A., Torralba, A. & Durand, F. What do different evaluation metrics tell us about saliency models? Preprint at http://arxiv.org/abs/1604.03605 (2016).
Anderson, N. C., Donk, M. & Meeter, M. The influence of a scene preview on eye movement behavior in natural scenes. Psychon. Bull. Rev. 23, 1794–1801 (2016).
Anderson, N. C., Ort, E., Kruijne, W., Meeter, M. & Donk, M. It depends on when you look at it: salience influences eye movements in natural scene viewing and search early in time. J. Vis. 15, 9 (2015).
Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. B 57, 289–300 (1995).
Einhäuser, W., Rutishauser, U. & Koch, C. Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli. J. Vis. 8, 2.1–19 (2008).
Oliva, A. & Torralba, A. Building the gist of a scene: the role of global image features in recognition. Prog. Brain Res. 155, 23–36 (2006).
Castelhano, M. S. & Henderson, J. M. The influence of color on the perception of scene gist. J. Exp. Psychol. Hum. Percept. Perform. 34, 660–675 (2008).
Castelhano, M. S. & Henderson, J. M. Flashing scenes and moving windows: an effect of initial scene gist on eye movements. J. Vis. 3, 67a (2003).
Tatler, B. W. The central fixation bias in scene viewing: selecting an optimal viewing position independently of motor biases and image feature distributions. J. Vis. 7, 4.1–17 (2007).
Acknowledgements
We thank the members of the UC Davis Visual Cognition Research Group for their feedback and comments. This research was partially funded by BCS-1636586 from the US National Science Foundation. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Author information
Contributions
J.M.H. conceived of and designed the study, and drafted and revised the manuscript. T.R.H. designed the study, collected and analysed the data, and revised the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Supplementary Information
Supplementary Methods, Supplementary Figures 1–6
Life Sciences Reporting Summary
About this article
Cite this article
Henderson, J.M., Hayes, T.R. Meaning-based guidance of attention in scenes as revealed by meaning maps. Nat Hum Behav 1, 743–747 (2017). https://doi.org/10.1038/s41562-017-0208-0
DOI: https://doi.org/10.1038/s41562-017-0208-0