The topic in brief

  • Virtual-reality (VR) systems simulate real-world inputs to one or more of an organism's sensory neural circuits, then measure the subject's actions and update the sensory stimuli in response.

  • In most rodent set-ups, the animal receives visual information from an immersive screen that spans its field of vision. The animal's movements control the visual flow, thereby replicating the sensory–motor coupling of the real world.

  • Typically, movement is restricted by fixing the rodent's head in position; this allows precise measurements of neural activity to be taken and correlated with motor actions in animals that are awake, rather than anaesthetized (Fig. 1).

    Figure 1: A mouse explores a virtual world.

    In a typical virtual-reality experiment, a mouse is head-fixed above a ball. Its legs are free, allowing it to move the ball in all directions. By moving the ball, the mouse navigates around a virtual world that is projected onto a 270° doughnut-shaped screen in front of it. Head fixing enables neural activity to be measured and correlated with the motor actions that drive movement.

  • Many researchers think that VR is a valuable tool for studying both navigation and sensory systems.

  • However, a body of work1,2,3 indicates that mice navigate differently in real and virtual worlds.

The best of both worlds

Matthias Minderer & Christopher D. Harvey

Virtual reality is a valuable tool for understanding neural function because it combines precise experimental control with natural behaviours. It allows experiments that are not possible using real-world approaches. As such, it has increased our understanding of neural processes in subjects ranging from humans to insects.

What are the experimental benefits of VR? First, the technology allows researchers to define explicitly and exhaustively the sensory cues that carry information about the virtual world. In real-world experiments, it is not possible to control all sensory cues. For example, when studying the contribution of visual cues to navigation, confounding information could be provided by unmeasured smells, sounds, textures and vestibular stimuli (internal information about balance and spatial orientation). VR offers the means to add or remove sensory cues to test the contribution of each one to a neural code, and to build up a 'minimal' set of stimuli needed to produce a given behaviour or neural activity pattern.
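
This cue-ablation logic can be sketched in code. The snippet below is purely illustrative: the cue names and the run_session routine are invented stand-ins, not part of any actual VR package.

    from itertools import combinations

    # Hypothetical cue set; in a real experiment, each entry would map
    # onto a controllable feature of the virtual environment.
    ALL_CUES = {"distal_landmarks", "optic_flow", "wall_texture", "reward_odour"}

    def run_session(cues):
        # Stand-in for running a VR session with only `cues` enabled and
        # returning task performance. The rule is a toy assumption:
        # optic flow plus at least one landmark-like cue supports the task.
        if "optic_flow" in cues and cues & {"distal_landmarks", "wall_texture"}:
            return 0.9
        return 0.5

    def minimal_cue_set(threshold=0.8):
        # Test progressively larger cue subsets and return the smallest
        # one that still yields above-threshold performance.
        for size in range(1, len(ALL_CUES) + 1):
            for subset in combinations(sorted(ALL_CUES), size):
                if run_session(set(subset)) >= threshold:
                    return set(subset)
        return set(ALL_CUES)

    print(minimal_cue_set())  # {'distal_landmarks', 'optic_flow'}

In practice the search would run over real stimulus channels and behavioural read-outs, but the structure of the argument is the same: remove cues until the behaviour degrades, and what remains is the minimal set.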

A second benefit comes from the ability to redefine the laws that link the subject's actions to changes in its world. When an animal explores the real world, it is difficult to disentangle which neural responses are attributable to the animal's actions and which are caused by sensory stimuli, because the two are rigidly linked by the laws of physics. In VR, this link can be modified in informative ways — sensory and motor features can be dissociated by changing the gain or lag between an action and a subsequent update of the virtual environment, or be made independent of one another for brief periods. Sensory and motor variables can therefore be separated while allowing the subject to interact naturally and actively with the sensory world.
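
A minimal sketch makes this concrete. The class below models a one-dimensional virtual corridor; its name, parameters and update rule are assumptions for illustration, not the interface of any real VR system.

    from collections import deque

    class ClosedLoopCorridor:
        # Toy model of the loop between ball movement and visual update.
        def __init__(self, gain=1.0, lag_frames=0, open_loop=False):
            self.gain = gain               # virtual metres per ball metre
            self.open_loop = open_loop     # if True, vision ignores movement
            self.position = 0.0            # position in the virtual corridor
            self.pending = deque([0.0] * lag_frames)  # delayed updates

        def step(self, ball_displacement):
            # Queue the (scaled) movement, then apply the oldest queued
            # update, so the visual consequence trails the action by
            # lag_frames frames.
            update = 0.0 if self.open_loop else self.gain * ball_displacement
            self.pending.append(update)
            self.position += self.pending.popleft()
            return self.position

    loop = ClosedLoopCorridor(gain=0.5, lag_frames=3)
    for frame in range(5):
        print(loop.step(1.0))  # 0.0, 0.0, 0.0, 0.5, 1.0

Halving the gain forces the animal to run twice as far for the same visual displacement; a non-zero lag delays the visual consequence of each action; and brief open-loop periods make the sensory stream independent of the motor stream.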

Third, VR increases the range of tools available to measure neural activity. Because the subject is usually constrained, techniques can be applied that are either not possible or of poorer quality in freely moving subjects. These include functional magnetic resonance imaging, high-resolution fluorescence imaging and intracellular single-neuron electrophysiology.

Many studies have shown that animals can solve navigational tasks in virtual worlds4,5,6. But the aspects of navigation that can be studied in VR depend on the experimental set-up — for instance, the number of sensory cues simulated, the degree of sensory immersion and how naturally the subject interacts with the virtual world. In VR experiments that provide visual inputs and allow body rotations to trigger vestibular signals, neural activity patterns during navigation are consistent with those measured in real-world experiments5. Furthermore, studies1,2,3 that remove key sensory inputs such as vestibular stimuli reveal which aspects of navigational neural activity depend on vestibular input and which can be supported by visual cues alone. Therefore, VR can recapitulate neural activity in real environments, and VR experiments can be designed to create informative differences between neural function in real and virtual worlds.

Overall, VR has yielded many insights into sensorimotor integration, decision-making and navigation6. But it is important to remember that, like all reductionist approaches, VR requires a trade-off between improved experimental accessibility and consistency with natural processes — the optimum set-up depends on the research question being asked. For instance, in studies of sensorimotor integration, it is crucial to dissociate sensory and motor variables. In navigation studies, convincing simulations are needed to probe the subject's internal model of the physical world. VR must be used judiciously, so that its implementation matches the needs of the question. Of course, this requirement applies to all experimental tools and is not specific to VR.

In summary, we regard VR as bridging the gap between natural behaviour and conventional reductionist approaches; this is a major step forward in the study of complex behaviours in many species. As the community of VR users grows and commercial VR technologies expand, we expect the range of applications for VR to continue to grow, enhancing our understanding of neural function.

A world away from reality

Flavio Donato & Edvard I. Moser

VR technology has obvious advantages for studies of simple sensorimotor computations, in which a defined set of inputs, such as those corresponding to an animal's movement, is associated linearly with neural output. However, pressing concerns arise when VR is used to study higher-order computations such as spatial navigation. Navigation reflects the integration of many sensory inputs, and the resulting outputs are not linearly related to sensory perception, but rather express cognitive abstractions.

Goal-driven navigation relies on several cell types in the brain, including place cells (which fire when an animal is in a particular location), grid cells (which fire at periodically spaced positions across the entire environment) and border cells (which fire selectively along local borders)7,8. By fixing an animal's head in place, investigators can monitor the activity of these neurons at high resolution while the animal runs between specific locations in virtual space. But do animals navigate in the same way in VR as in real life?

Navigating in the real world is a multisensory process that integrates visual, olfactory and tactile stimuli with vestibular information and information about the activity of moving body parts. But in VR, these elements are often not coordinated, and the animal's sensory experience is largely reduced to a combination of visual inputs and locomotion, which are easy to control. The animal must overcome discrepancies between visual cues that follow its movements and cues that remain static in VR, such as smell or head direction. Conflicts between movement and sensory inputs might alter the activity of space-encoding neurons to reflect only information coordinated with motion, such as visually changing landmarks and accumulated distance1,2, at the expense of other cues. This could lead researchers to overestimate the contribution of visual inputs to navigation and, in the most extreme cases, might lead to the loss of the computation altogether2.

A particular concern is whether the loss of vestibular input that accompanies movement restriction affects animals' computation of their position. A continuous mismatch between vestibular and visual inputs might not be detrimental in linear environments. When an animal runs in a straight line, visual inputs are repeatedly and stereotypically paired with the same locomotor information, which may, with continued training, allow the animal to compensate for mismatches. However, such a mismatch might have a greater effect in two-dimensional (2D) or three-dimensional (3D) VR arenas.
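
A toy path integrator shows why the geometry matters. The numbers below are made up, and the gain parameters merely stand in for a sensory mismatch; this is not a model of the cited experiments.

    import math

    def integrate_path(steps, trans_gain=1.0, rot_gain=1.0):
        # Accumulate a position estimate from (distance, turn) steps,
        # with gains standing in for a miscalibrated sense of
        # translation or rotation.
        x = y = heading = 0.0
        for distance, turn in steps:
            heading += rot_gain * turn
            x += trans_gain * distance * math.cos(heading)
            y += trans_gain * distance * math.sin(heading)
        return x, y

    # Linear track: no turns, so a translation mismatch only rescales
    # the travelled distance by a constant, learnable factor.
    print(integrate_path([(1.0, 0.0)] * 10, trans_gain=0.8))  # ~(8.0, 0.0)

    # Square path in a 2D arena: a small rotational mismatch compounds
    # at every turn, so the same path no longer closes on itself.
    square = [(1.0, math.pi / 2)] * 8
    print(integrate_path(square))                # ends near the origin
    print(integrate_path(square, rot_gain=0.9))  # ends displaced from it

On a straight track the error is a single scale factor; in two dimensions it corrupts heading, so every subsequent step is integrated in the wrong direction.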

Indeed, when movement is unrestrained, the position-coding activity of place and grid cells in 2D VR is similar to that in the real world5. In stark contrast, position coding is disrupted and a different code emerges when body movement is restricted or the head is fixed2. These data cast doubt on whether the way in which animals interpret 2D or 3D space can ever be understood using VR under conditions of head or body restriction. Strategies that compensate for the loss of synchrony between vestibular information and the animal's behaviour would be a welcome advance.

Finally, are all types of position-coding cell represented in VR-based navigation? It is unclear whether and how border, speed and head-direction cells are activated when movement is restricted. Moreover, cells might not fire in the same way in the two worlds. In one analysis2, 60% of the place cells activated in the real world were silent in VR. Studies typically check that VR-activated cells are also active in real-world sessions, but the opposite direction of investigation lags behind, although there are exceptions3.

More than 40 years ago, the neuroscientist John O'Keefe changed our understanding of the physiology of navigation by studying rats freely foraging for food. By allowing the natural sensory–motor interactions required for the formation of an internal representation of space, O'Keefe discovered the first element of the 'cognitive map' — the place cell9. VR can extend that ecological approach to higher cognitive functions. But to do so successfully, the technology needs further development and validation.