News & Views


Synapses get together for vision

Nature 547, 408–410 (27 July 2017)

A sophisticated analysis in mice of how inputs to neurons from other neurons are distributed across individual cells of the brain's visual cortex provides information about how mammalian vision is processed. See Letter p.449

A typical pyramidal neuron in the brain's visual cortex receives thousands of excitatory signals from other neurons, transmitted across connections called synapses. These inputs from presynaptic neurons end on tiny protrusions called spines on the postsynaptic neuron's tree-like processes (dendrites). In principle, when a sufficient number of inputs are active at the same time, the postsynaptic neuron will fire. But not all inputs are equal: it matters where on the dendritic tree an input is located, and whether it is activated by similar stimuli to those that activate its neighbours, allowing simultaneously active inputs to team up for greater impact1. On page 449, Iacaruso et al.2 describe how inputs activated by stimuli at different locations in visual space are mapped onto the dendrites of neurons in the visual cortex.

Neurons in the visual cortex respond to specific attributes of visual stimuli, including particular contour orientations, the presence or absence of light, or a combination of factors. Importantly, each neuron responds only to visual stimuli in a small, defined region of the scene, known as that neuron's receptive field. The visual cortex as a whole contains a systematic representation of the visual scene, such that neighbouring neurons have receptive fields close to or overlapping one another, and neurons farther apart have distant receptive fields. One aspect of neuronal organization in the visual cortex that has remained unclear is whether synaptic inputs that are activated together cluster on dendrites. Previous studies3,4,5 have reached divergent conclusions.

Iacaruso et al.2 took a fresh look at this issue. They used small black-and-white squares as stimuli that they presented at different locations in a mouse's field of view to activate synaptic inputs from presynaptic neurons. They then painstakingly mapped a fraction of the active inputs to individual postsynaptic neurons. This was achieved by measuring brief increases in calcium-ion concentration, which occur in response to input activation, in dendritic spines. The researchers observed that spines located close to one another on a dendrite did tend to respond to stimuli in overlapping regions of visual space (Fig. 1). However, not all stimulus features were equally relevant — clustering was not determined by the orientation of contours, in agreement with earlier results in mice3,4.
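The logic of this clustering analysis can be illustrated with a toy computation (a hypothetical sketch with made-up numbers, not the authors' actual pipeline): given each spine's position along a dendrite and the centre of its receptive field, one can test whether spines that sit close together on the dendrite also respond to nearby regions of visual space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 spines on one dendrite.
# dendritic_pos: distance of each spine from the cell body (micrometres)
# rf_center: centre of each spine's receptive field in visual space (degrees)
dendritic_pos = np.sort(rng.uniform(0, 150, size=40))
# Make receptive-field position loosely follow dendritic position,
# mimicking clustering of inputs with overlapping receptive fields.
rf_center = 0.1 * dendritic_pos + rng.normal(0, 2, size=40)

# For every pair of spines, compare their separation along the dendrite
# with the separation of their receptive-field centres.
i, j = np.triu_indices(len(dendritic_pos), k=1)
dendritic_dist = np.abs(dendritic_pos[i] - dendritic_pos[j])
rf_dist = np.abs(rf_center[i] - rf_center[j])

# A positive correlation means nearby spines respond to nearby regions
# of visual space, i.e. functionally similar inputs cluster.
r = np.corrcoef(dendritic_dist, rf_dist)[0, 1]
print(f"correlation between dendritic and visual-space distance: {r:.2f}")
```

A positive correlation between the two pairwise distances is the signature of the clustering described above; the real analysis, of course, works from measured calcium signals rather than simulated positions.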

Figure 1: Mapping visual space onto individual neurons.

In a whole scene, the field of view is the portion processed by the eyes at any one time. Neurons in the brain's visual cortex (such as that in pink) are activated by light–dark boundaries of particular orientations in a small region of the field of view, known as their receptive field (orientation preference is indicated by the dotted line). Neurons receive information from others (blue) via small protrusions called spines on branched processes called dendrites. Iacaruso et al.2 report that the arrangement of these inputs across dendrites in mice depends on the stimuli that activate the neurons involved — both those that send inputs and the postsynaptic neuron that receives them. Inputs from neurons that have overlapping receptive fields cluster on dendrites, regardless of their orientation preference. Inputs whose receptive field lies in regions of the scene close to that of the postsynaptic neuron tend to terminate close to the cell body, whereas those whose receptive fields lie in a distant part of the scene terminate farther from the cell body. These remote inputs tend to share the orientation preference of the postsynaptic neuron, and their receptive fields lie along an axis in the field of view formed by that orientation.

Next, the authors focused on spines that respond to stimuli from distant regions of the visual scene. Although neurons respond best to stimuli in their receptive field, their activity can be modulated by visual stimulation elsewhere in the visual scene6. These modulatory inputs are thought to provide contextual information, which might help in perceptual grouping — the process that determines which pieces of highly localized information belong together when objects extend over large regions of the visual scene, spanning many receptive fields6,7.

Iacaruso et al. found that long-range synaptic inputs from presynaptic neurons that respond to distal regions of the visual field are not random. Instead, the presynaptic neuron is frequently tuned to the same orientation as the postsynaptic neuron, and responds to stimuli located in a part of the visual field that is 'co-linear' with the preferred orientation of the postsynaptic neuron. For instance, if a postsynaptic neuron prefers horizontal contours, the presynaptic neuron is activated by horizontal stimuli positioned in the visual scene along a horizontal axis (a co-linear axis) from the receptive field of the postsynaptic neuron. Anatomical8 and functional9 data from other animal models have suggested a similar set-up for long-range connectivity in the visual cortex, but Iacaruso et al. are the first to demonstrate this complex connectivity directly.
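The notion of co-linearity can be made concrete with a small geometric check (a hypothetical helper, not taken from the paper): an input counts as co-linear if the line connecting the two receptive-field centres runs parallel to the postsynaptic neuron's preferred orientation.

```python
import numpy as np

def colinearity(post_rf, pre_rf, post_orientation_deg):
    """Angle (degrees) between the postsynaptic neuron's preferred-
    orientation axis and the line connecting the two receptive-field
    centres; 0 means perfectly co-linear. (Hypothetical helper.)"""
    dx, dy = np.subtract(pre_rf, post_rf)
    # Orientation of the connecting line, folded onto 0-180 degrees
    # because an orientation axis has no direction.
    connecting = np.degrees(np.arctan2(dy, dx)) % 180
    diff = abs(connecting - post_orientation_deg % 180)
    return min(diff, 180 - diff)

# A postsynaptic neuron preferring horizontal contours (0 degrees), with
# its receptive field at (0, 0): an input at (10, 0) lies on the
# horizontal axis through that receptive field, hence is co-linear.
print(colinearity((0, 0), (10, 0), 0))   # 0.0
print(colinearity((0, 0), (0, 10), 0))   # 90.0 -> orthogonal, not co-linear
```

In these terms, the long-range inputs described above would score near zero: their receptive fields lie along the axis defined by the postsynaptic neuron's orientation preference.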

Finally, the authors observed that nearby and remote presynaptic neurons terminate at different dendritic locations on the postsynaptic neuron — remote, long-range inputs target distal dendrites, whereas inputs from cells that have receptive fields next to that of the postsynaptic neuron are located closer to the cell body (Fig. 1). This finding further supports the idea that visual space is mapped onto the dendrites in an organized way.

Iacaruso and colleagues' data set provides a starting point for further investigation of the relationship between the spatial arrangement of synaptic inputs and neuronal outputs, in the visual cortex and beyond. However, many questions remain. For instance, how does the clustering of similar inputs shape the way in which a neuron integrates information from all of its inputs? Does co-activation of neighbouring, spatially correlated inputs lead to dendritic amplification of like signals, which has been suggested to enhance orientation selectivity in the visual cortex in ferrets5 and mice10? Do the synaptic inputs that provide information about remote, co-oriented and co-linear regions of the visual scene contribute to Gestalt phenomena — the rules by which our brain attempts to parse meaning from our perceptions of the surrounding world, for example by instantly identifying figures in otherwise randomly distributed lines7?

Answering these questions will probably require a drastic increase in the throughput of the difficult experiments undertaken by Iacaruso and co-workers. Individual cells must be stimulated with a wide array of more-complex visual stimuli, and activity recorded from many more spines and neurons. Ideally, these analyses would be performed in awake animals; the mice in the current study were lightly anaesthetized, precluding analysis of neuronal activity during behaviour.

A final question is how such intricate connectivity arises during brain development. Some evidence11 suggests that the well-established theory that neurons that fire together wire together might come into play here, causing the clustering of co-activated synaptic inputs during development. Alternatively, dendritic organization might be the result of activity-independent processes. Indeed, sister cells derived from the same neuronal progenitor in the visual cortex are more likely than unrelated neurons to be connected and have similar orientation preferences12,13. Insight could come from classic experiments involving animals reared in particular visual environments (such as complete darkness or high-contrast contours), together with functional input mapping as used by Iacaruso and colleagues.



References

1. Annu. Rev. Neurosci. 28, 503–532 (2005).
2. Nature 547, 449–452 (2017).
3. Nature 499, 295–300 (2013).
4. Nature 464, 1307–1312 (2010).
5. Nature Neurosci. 19, 1003–1009 (2016).
6. Neuron 75, 250–264 (2012).
7. Vision Res. 33, 173–193 (1993).
8. J. Neurosci. 17, 2112–2127 (1997).
9. J. Neurosci. 11, 2995–3007 (1991).
10. Nature 503, 115–120 (2013).
11. Neuron 87, 399–410 (2015).
12. Nature 486, 118–121 (2012).
13. Nature 496, 96–100 (2013).


Author information


  1. Tobias Rose and Mark Hübener are at the Max Planck Institute of Neurobiology, 82152 Martinsried, Germany.


Corresponding authors

Correspondence to Tobias Rose or Mark Hübener.
