
Stimulus- and goal-oriented frameworks for understanding natural vision


Our knowledge of sensory processing has advanced dramatically in the last few decades, but this understanding remains far from complete, especially for stimuli with the large dynamic range and strong temporal and spatial correlations characteristic of natural visual inputs. Here we describe some of the issues that make understanding the encoding of natural images a challenge. We highlight two broad strategies for approaching this problem: a stimulus-oriented framework and a goal-oriented one. Different contexts can call for one framework or the other. Looking forward, recent advances, particularly those grounded in machine learning, show promise in combining key strengths of both frameworks and, in doing so, illuminating a path to a more comprehensive understanding of the encoding of natural stimuli.
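To make the stimulus-oriented framework concrete, here is a minimal sketch (not from this review; all names and values are illustrative) of a linear–nonlinear (LN) encoding model, the classic descriptive model whose limitations for natural stimuli motivate much of the discussion: the stimulus is projected onto a linear filter and the result is passed through a static nonlinearity.

```python
import numpy as np

def ln_response(stimulus, filt, nonlinearity=lambda g: np.maximum(g, 0.0)):
    """Linear-nonlinear (LN) model: project each stimulus onto one linear
    filter, then apply a static nonlinearity (here, rectification)."""
    drive = np.dot(stimulus, filt)   # linear stage: filter projection
    return nonlinearity(drive)       # static nonlinearity

# Toy example: a crude center-surround-like 1D filter applied to random stimuli.
rng = np.random.default_rng(0)
filt = np.array([-0.5, 1.0, -0.5])       # illustrative difference-of-boxes filter
stimuli = rng.normal(size=(5, 3))        # 5 stimulus frames, 3 pixels each
rates = ln_response(stimuli, filt)
print(rates.shape)                       # (5,)
```

The single-filter, single-static-nonlinearity structure is exactly what breaks down for natural inputs with strong correlations and large dynamic range, which is why the review turns to subunit, normalization, and deep-network models.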


Fig. 1: Texture synthesis based on deep convolutional neural networks.
Fig. 2: Efficient coding strategies rely on self-generated movement.
Fig. 3: Beyond-pairwise statistics contribute to complex structure in natural images.
Fig. 4: Motion-sensitive neurons encode self-movement across the animal kingdom.
Fig. 5: DNNs reflect some, but not all, architectural and computational motifs found in neural circuits.
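Fig. 1 refers to texture synthesis of the kind introduced by Gatys et al. (ref. 38), in which a new image is optimized to match the spatially averaged correlations (Gram matrices) between a CNN's feature channels. As a hedged sketch of just the statistic involved (the feature map here is random, standing in for one CNN layer's activations):

```python
import numpy as np

def gram_matrix(features):
    """Texture statistic used in CNN texture synthesis: spatially averaged
    correlations between feature channels.
    `features` has shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (h * w)         # (channels x channels) Gram matrix

# Toy stand-in for one layer's feature maps.
rng = np.random.default_rng(1)
feats = rng.normal(size=(4, 8, 8))
g = gram_matrix(feats)
print(g.shape)                       # (4, 4)
```

Matching these matrices across layers, rather than matching pixels, is what lets synthesized images preserve texture appearance while discarding spatial arrangement.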




Acknowledgements

We thank H. Krapp, D. Pospisil, and J. Shlens for helpful feedback on an earlier version of this review. H. Krapp very generously provided the data and schematic shown in Fig. 4a,b. This work was supported by NIH grants F31-EY026288 (to M.H.T.) and EY028542 (to F.R.) and by National Science Foundation grant 1715475 (to O.S.).

Author information



Corresponding author

Correspondence to Fred Rieke.

Ethics declarations

Competing interests

The authors declare no competing interests.



Cite this article

Turner, M.H., Sanchez Giraldo, L.G., Schwartz, O. et al. Stimulus- and goal-oriented frameworks for understanding natural vision. Nat Neurosci 22, 15–24 (2019).
