Meaning-based guidance of attention in scenes as revealed by meaning maps


Real-world scenes comprise a blooming, buzzing confusion of information. To manage this complexity, visual attention is guided to important scene regions in real time1,2,3,4,5,6,7. What factors guide attention within scenes? A leading theoretical position suggests that visual salience based on semantically uninterpreted image features plays the critical causal role in attentional guidance, with knowledge and meaning playing a secondary or modulatory role8,9,10,11. Here we propose instead that meaning plays the dominant role in guiding human attention through scenes. To test this proposal, we developed ‘meaning maps’ that represent the semantic richness of scene regions in a format that can be directly compared to image salience. We then contrasted the degree to which the spatial distributions of meaning and salience predict viewers’ overt attention within scenes. The results showed that both meaning and salience predicted the distribution of attention, but that when the relationship between meaning and salience was controlled, only meaning accounted for unique variance in attention. This pattern of results was apparent from the very earliest time-point in scene viewing. We conclude that meaning is the driving force guiding attention through real-world scenes.
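The comparison described above — how much variance in attention each map explains overall, and how much *unique* variance each explains once its overlap with the other is controlled — can be sketched with squared linear and semipartial correlations. The following is a minimal illustration only, assuming each map is a 2-D array on a common grid; the function names and the simple least-squares residualization are hypothetical stand-ins, not the published analysis code:

```python
import numpy as np

def map_correlation(pred_map, attention_map):
    """Squared linear correlation (R^2) between a predictor map
    (e.g. meaning or salience) and an attention (fixation density) map."""
    r = np.corrcoef(pred_map.ravel(), attention_map.ravel())[0, 1]
    return r ** 2

def unique_variance(target_map, competing_map, attention_map):
    """Squared semipartial correlation: variance in attention explained
    by target_map after its shared variance with competing_map is removed."""
    t = target_map.ravel().astype(float)
    c = competing_map.ravel().astype(float)
    a = attention_map.ravel().astype(float)
    # Residualize the target predictor on the competing predictor
    # with an ordinary least-squares line fit.
    slope, intercept = np.polyfit(c, t, 1)
    residual = t - (slope * c + intercept)
    r = np.corrcoef(residual, a)[0, 1]
    return r ** 2
```

On this logic, two correlated predictors can both show substantial zero-order `map_correlation` with attention, yet only one of them retain a non-zero `unique_variance` — the pattern the abstract reports for meaning versus salience.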

Nature Human Behaviour


References

  1. Land, M. F. & Hayhoe, M. M. In what ways do eye movements contribute to everyday activities? Vision Res. 41, 3559–3565 (2001).
  2. Hayhoe, M. M. & Ballard, D. Eye movements in natural behavior. Trends Cogn. Sci. 9, 188–194 (2005).
  3. Henderson, J. M. Human gaze control during real-world scene perception. Trends Cogn. Sci. 7, 498–504 (2003).
  4. Henderson, J. M. Gaze control as prediction. Trends Cogn. Sci. 21, 15–23 (2017).
  5. Buswell, G. T. How People Look at Pictures (Univ. Chicago Press, Chicago, 1935).
  6. Yarbus, A. L. Eye Movements and Vision (Plenum, 1967).
  7. Henderson, J. M. & Hollingworth, A. High-level scene perception. Annu. Rev. Psychol. 50, 243–271 (1999).
  8. Itti, L. & Koch, C. Computational modelling of visual attention. Nat. Rev. Neurosci. 2, 194–203 (2001).
  9. Parkhurst, D., Law, K. & Niebur, E. Modelling the role of salience in the allocation of visual selective attention. Vision Res. 42, 107–123 (2002).
  10. Borji, A., Sihite, D. N. & Itti, L. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Trans. Image Process. 22, 55–69 (2013).
  11. Walther, D. & Koch, C. Modeling attention to salient proto-objects. Neural Networks 19, 1395–1407 (2006).
  12. Henderson, J. M. Regarding scenes. Curr. Dir. Psychol. Sci. 16, 219–222 (2007).
  13. Itti, L., Koch, C. & Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998).
  14. Harel, J., Koch, C. & Perona, P. in Advances in Neural Information Processing Systems (NIPS 2006) Vol. 19, 1–8 (2006).
  15. Potter, M. Meaning in visual search. Science 187, 965–966 (1975).
  16. Biederman, I. Perceiving real-world scenes. Science 177, 77–80 (1972).
  17. Wolfe, J. M. & Horowitz, T. S. Five factors that guide attention in visual search. Nat. Hum. Behav. 1, 0058 (2017).
  18. Tatler, B. W., Hayhoe, M. M., Land, M. F. & Ballard, D. H. Eye guidance in natural vision: reinterpreting salience. J. Vis. 11, 5 (2011).
  19. Carmi, R. & Itti, L. The role of memory in guiding attention during natural vision. J. Vis. 6, 898–914 (2006).
  20. Torralba, A., Oliva, A., Castelhano, M. S. & Henderson, J. M. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychol. Rev. 113, 766–786 (2006).
  21. Elazary, L. & Itti, L. Interesting objects are visually salient. J. Vis. 8, 3.1–15 (2008).
  22. Henderson, J. M., Brockmole, J. R., Castelhano, M. S. & Mack, M. in Eye Movements: A Window on Mind and Brain (eds Van Gompel, R. P. G. et al.) 537–562 (Elsevier, Oxford, 2007).
  23. Henderson, J. M., Malcolm, G. L. & Schandl, C. Searching in the dark: cognitive relevance drives attention in real-world scenes. Psychon. Bull. Rev. 16, 850–856 (2009).
  24. Nuthmann, A. & Henderson, J. M. Object-based attentional selection in scene viewing. J. Vis. 10, 20 (2010).
  25. Bylinskii, Z., Judd, T., Oliva, A., Torralba, A. & Durand, F. What do different evaluation metrics tell us about saliency models? Preprint at http://arxiv.org/abs/1604.03605 (2016).
  26. Anderson, N. C., Donk, M. & Meeter, M. The influence of a scene preview on eye movement behavior in natural scenes. Psychon. Bull. Rev. 23, 1794–1801 (2016).
  27. Anderson, N. C., Ort, E., Kruijne, W., Meeter, M. & Donk, M. It depends on when you look at it: salience influences eye movements in natural scene viewing and search early in time. J. Vis. 15, 9 (2015).
  28. Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. B 57, 289–300 (1995).
  29. Einhäuser, W., Rutishauser, U. & Koch, C. Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli. J. Vis. 8, 2.1–19 (2008).
  30. Oliva, A. & Torralba, A. Building the gist of a scene: the role of global image features in recognition. Prog. Brain Res. 155, 23–36 (2006).
  31. Castelhano, M. S. & Henderson, J. M. The influence of color on the perception of scene gist. J. Exp. Psychol. Hum. Percept. Perform. 34, 660–675 (2008).
  32. Castelhano, M. S. & Henderson, J. M. Flashing scenes and moving windows: an effect of initial scene gist on eye movements. J. Vis. 3, 67a (2003).
  33. Tatler, B. W. The central fixation bias in scene viewing: selecting an optimal viewing position independently of motor biases and image feature distributions. J. Vis. 7, 4.1–17 (2007).



Acknowledgements

We thank the members of the UC Davis Visual Cognition Research Group for their feedback and comments. This research was partially funded by grant BCS-1636586 from the US National Science Foundation. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information


  1. Department of Psychology, University of California, Davis, CA 95618, USA

    • John M. Henderson
  2. Center for Mind and Brain, University of California, Davis, CA 95618, USA

    • John M. Henderson
    • Taylor R. Hayes




Author contributions

J.M.H. conceived of and designed the study, and drafted and revised the manuscript. T.R.H. designed the study, collected and analysed the data, and revised the manuscript.

Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to John M. Henderson.

Electronic supplementary material

  1. Supplementary Information

    Supplementary Methods, Supplementary Figures 1–6

  2. Life Sciences Reporting Summary