Rendering the brain-behavior link visible

Journal name: Nature Methods
Volume: 9
Pages: 953–958
DOI: doi:10.1038/nmeth.2183

In vivo imaging scientists broadcast from inside the brains of moving animals.

Main

A message on an Alzheimer's disease online board seeks advice about a 75-year-old aunt who incessantly yells at her 80-year-old husband, without whom she cannot move about. The aunt has stopped eating and refuses a doctor's visit. Behavioral changes in patients with Alzheimer's disease—memory loss, impeded speech or motor skills, irritability, and other personality changes—leave loved ones clamoring for ways to lessen suffering. For now, medications can only slow the disease's progression in some cases, not cure it. Neuroscientists are convinced that patients will benefit from new ways of peering into the brain activity that underlies behavior and the disturbances that brain diseases create.

To directly monitor the activity of neurons in the brains of awake, active animals as they move, react and remember, scientists have to create special experimental environments in which they can apply an evolving array of activity indicators: for example, calcium, whose concentration rises when neurons fire (Box 1). The animals may be moving or be partially restrained, and the imaging systems, based on modalities such as one- and two-photon microscopy1, are tailored to the animal in a variety of ways.
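Calcium-indicator recordings are commonly summarized as the relative fluorescence change ΔF/F, the fractional deviation of a neuron's fluorescence from its resting baseline. The sketch below is a generic illustration, not code from any study described here; estimating the baseline as a low percentile of the trace is one common convention, assumed for simplicity.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """Relative fluorescence change dF/F for a 1D fluorescence trace.

    The baseline F0 is estimated as a low percentile of the trace,
    a common choice when the neuron is silent most of the time.
    """
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Toy trace: flat baseline at 10 with one calcium transient.
trace = np.array([10.0, 10.0, 10.0, 30.0, 20.0, 10.0])
dff = delta_f_over_f(trace)  # peaks at (30 - 10) / 10 = 2.0
```

In practice the baseline percentile (and whether it is computed in a sliding window) is tuned to the indicator's kinetics and the neuron's firing statistics.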

Box 1: Watching waves move, hearing soft voices

Researchers built the first system to image superficial cortex regions in behaving rats (chewing, resting and scampering) in 2001. Princeton University scientist David Tank, who is in both the molecular biology and physics departments, Winfried Denk, who directs the Max Planck Institute for Medical Research, and their colleagues videotaped the behaving rats and imaged brain activity using a head-mounted miniaturized two-photon fluorescence microscope.

The microscope, with an objective, motor, collimator and emission filter, was mounted on the animal and connected to an excitation laser and optical apparatus through a tethered optical fiber that delivered two-photon excitation pulses. The team imaged blood vessels and applied calcium indicators, resolving the activity of individual neurons.

Tank and his team later developed ways to have a mouse star in its own movie. As the animal runs on a spherical treadmill, its movements change the virtual-reality environment around it2. While the mouse moves, the scientists capture the firing patterns and membrane-potential dynamics of a type of neuron in the hippocampus called place cells, which help animals orient themselves. This experimental approach to address unresolved questions about brain function offers a “powerful example of what will be learned in the decades to come,” University of California, San Diego, neuroscientist Douglas Nitz noted about the place-cell study in 2009. The research also shows “that it is not impossible to examine brain correlates of higher cognitive processes and at the same time identify their underlying causes at the cellular level.” His words still hold true today, he says.

At Princeton University, a running mouse changes its virtual-reality environment. Reprinted from ref. 12.

Without a view inside an animal's brain as it behaves, scientists are unable to ask “the most important questions,” says Harvard University neuroscientist Florian Engert. “This is why David Tank's mouse on a trackball was such an enormous breakthrough.” “It's incredible, what he's doing,” says Paul Katz, a neuroscientist at Georgia State University. Science needs conceptual and technical leaps. “Every technical breakthrough leads to new concepts,” says Katz.

Paul Katz plans to use optical imaging to study circuits across sea slug species.

A tiny microscope

A separate technology leap was the development of miniaturized fluorescence microscopes that mice or rats can wear as a high-tech hat. These instruments record neuronal activity while the animal scurries about.

One such device was developed in the lab of Mark Schnitzer, who is a Howard Hughes Medical Institute investigator in the biology and applied physics departments at Stanford University, in collaboration with Abbas El Gamal and Kunal Ghosh from the electrical engineering department. It weighs 1.9 grams, less than a teaspoon of flour. The instrument allowed the team to image nine cerebellar microzones, recording changes in capillary diameter and blood flow as well as calcium spiking in over 200 Purkinje neurons as the animal rested, walked around its cage or ran in its exercise wheel3.

Dan Stober, Stanford News Service

Mark Schnitzer wants to study how large ensembles of neurons encode behavior.

His group is now using the miniaturized microscope to study how large ensembles of neurons encode behavior. “At this point we have really transitioned a bit more from technology development into taking advantage of the new opportunities in systems neuroscience,” Schnitzer says. The device is now being commercialized (Box 2).

Box 2: Eying a future market

Another approach in his lab involves inserting microendoscopes into the brain and using them for one- and two-photon fluorescence imaging. Schnitzer's team has managed to image a mouse's deep-lying neurons4. Although prior longitudinal studies of brain areas closer to the surface exist, this work imaged neurons deeper in the hippocampus over longer time periods. Now he can combine this approach with the integrated microscope. “This opens the prospect of looking at neural dynamics across large populations of individual neurons,” says Schnitzer.

Tank's group applies two-photon microscopy; Schnitzer chooses the technique on the basis of whether the experiment calls for animals that are freely moving or head restrained. Schnitzer says that one-photon images can be less crisp than two-photon ones, yet the former technique can sample many neurons and obtain fast frame rates. Despite its slightly compromised resolution and optical contrast, the greater collection of photons in one-photon imaging means that the ability to detect calcium signaling in cells close to the objective lens tends to be “about the same,” he explains. “I see a role for both of them.”

Dan Stober, Stanford News Service

This microscope weighs less than a teaspoon of flour.

Know thy neurons

The ambition to understand the link between behavior and the activity of neural circuits dates back to locomotion and learning studies in the 1950s that probed the components of circuits in insects and other invertebrates with electrodes, Katz explains5, 6, 7, 8. Over time, the electrophysiological approach has expanded in many ways. In his summer course on neural systems and behavior at the Marine Biological Laboratory, graduate students and postdocs explore tagging and imaging approaches across many species to place neural circuits in the context of the behaving animal.

No explicit walls separate research groups working on different species, but there are 'islands', says Engert, who studies zebrafish. There is a “standard set of experimental tricks” to tease apart contributions of sensory input, internal state and behavioral output across species, says Vivek Jayaraman, a neuroscientist at Howard Hughes Medical Institute's Janelia Farm who studies Drosophila. Beyond working out circuitry more generally, “there's a level at which you have to get the details right in your system, and there's no shortcut there.”

Malathi Thothathiri

Vivek Jayaraman is studying neural circuits in behaving flies.

Although Katz now applies standard electrophysiology, he plans to use optical imaging to see differences and commonalities in circuits across sea-slug species. Not only do sea slugs have large neurons, but each neuron is individually identifiable from animal to animal. “That means you can work out a circuit with absolute fidelity,” he says. Flies offer a similar fidelity. “The slugs have nothing on us there,” jokes Jayaraman about a possible superiority of slugs over flies, but he adds that fruit flies do have tinier neurons. He has used two-photon microscopy and genetically encoded calcium indicators to image neuronal activity in active fruit flies9. Genetic tools allow fruit fly scientists to target specific subsets of neurons in fly after fly, making the neurons identifiable. The neurons also “are lit up like a Christmas tree if you put GFP in those lines,” Jayaraman says.

In Jayaraman's lab, tethered fruit flies are exposed to visual stimuli while they walk on a trackball or fly in place. The fly's head is held fixed under a microscope so that the cells in its brain can be imaged as the animal walks or flies. The researchers recorded neuronal signaling from motion-sensitive neurons of the optic lobe using two-photon imaging and a calcium sensor, GCaMP3.

To encourage information exchange about physiological techniques for tethered, active fruit flies, his team runs the website FlyFizz (http://openwiki.janelia.org/wiki/display/flyfizz/Home/) with the Card and Reiser labs at Janelia. Even with all the information at the ready, “it still takes a little while to get it all right in your own lab,” he says, adding that his postdoc Johannes Seelig has a special knack for such experiments.

Despite constant tweaking, no longer does every new experiment feel like a pilot, Jayaraman says. Technology development has stabilized enough even for labs that do not regularly work on fruit fly physiology to try it. “I think it's an attractive technology for a lot of people to get into circuits from behavior.” Researchers adapt the systems to record from different brain areas in flies or with differing sensory stimulation. “It's not just about doing cool technical things and having technical toys,” he says. “The whole point of this is to get to the circuits.”

Eugenia Chiappe

Drosophila can be imaged as the animals walk or fly. Reprinted from ref. 9.

Some might say the Caenorhabditis elegans community got a big head start on getting to the circuits: a team delivered the wiring diagram of the animal's 302 neurons in 1986. C. elegans already offers the advantage of being transparent and thin, but this diagram identifies every individual neuron, says William Schafer, a neuroscientist at the University of Cambridge. In studying function in wild-type and mutant animals, one can record repeatedly from exactly the same neuron, in a known position in a known circuit, in an unlimited number of animals, he says. “Even for Drosophila, the next-best characterized nervous system, that sort of universal precision is a long way off.”

Many imaging systems exist for C. elegans. In Schafer's lab, neuronal signals have been recorded from freely behaving worms with an automated system that tracks the fluorescent neuron and moves the stage to keep the spot close to the center of the viewing field, he says.
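The core of such a tracker can be sketched in a few lines: locate the labeled neuron's intensity-weighted centroid in each camera frame, then command the stage to cancel the centroid's displacement from the field center. This is a generic sketch of the closed-loop idea, not the lab's actual implementation; the frame size, proportional gain and centroid method are all assumptions.

```python
import numpy as np

FIELD = 64  # assumed side length, in pixels, of the square camera frame

def stage_correction(frame, gain=1.0):
    """Stage offset (dy, dx) that recenters the brightest fluorescent spot.

    The spot is located by an intensity-weighted centroid; the returned
    correction is proportional to the negative of its displacement
    from the center of the field of view.
    """
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    cy = (ys * frame).sum() / total
    cx = (xs * frame).sum() / total
    center = (FIELD - 1) / 2.0
    return -gain * (cy - center), -gain * (cx - center)

# Toy frame: a single bright pixel above and left of center.
frame = np.zeros((FIELD, FIELD))
frame[10, 20] = 1.0
dy, dx = stage_correction(frame)  # positive values: move spot down and right
```

A real system would add thresholding against background fluorescence and limit the commanded velocity to what the motorized stage can deliver between frames.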

“With newer high-resolution cameras, it is also possible to dispense with the tracking stage and just use a magnification low enough that the worm always stays within the field of view,” Schafer says. Yet the spatial resolution can suffer, and multineuron imaging is difficult, requiring more sensitive indicators to harvest enough signal at the lower magnification. “But you can image worms in more interesting environments,” he says.

Knowledge of the worm's circuitry has helped researchers understand the neural basis of behaviors such as responding to temperature. But the animal's overall neural circuit simplicity can be a drawback because worm neural circuits can differ from those of larger animals that are not constrained by neuron number, Schafer says.

Swimming in the Matrix

Monitoring all the brain's neurons at single-cell resolution in a behaving animal sounds like a wide-eyed dream. Vertebrate heads and their opaque packaging shut out inquisitive eyes. With one exception: “There's nothing that beats a larval zebrafish,” says Engert; the embryo's size and translucence make this vertebrate well suited for neuroscience studies.

Misha B. Ahrens

To create whole-brain maps of neural activity, recorded data are mapped to reference images.

Engert has pursued two imaging approaches in parallel: bioluminescence and two-photon microscopy. Illuminating neurons with GFP linked to the jellyfish protein Aequorin, the team developed an inexpensive way to use transgenic fish carrying a calcium indicator expressed in a small number of genetically specified neurons10. The setup is “appealing in its simplicity,” he says. “It's almost like mind-reading at a distance.” Neuronal activity brightens the signal, which is recorded with a photomultiplier tube. GFP-Aequorin needs no excitation light, and its activity-based glow reveals the activity of neural populations in behaving zebrafish over several days. Although this type of neuroluminescence cannot be used to image individual cells or spikes, the team believes that efficiency boosts to light detectors will improve data capture.

Misha B. Ahrens

Fictive swimming in larval zebrafish allows reconstruction of motor-related brain activity.

Separately, using two-photon microscopy and a time series of images, Engert, postdoctoral fellow Misha Ahrens and their colleagues managed wide cellular-level brain imaging11. They mapped the activity of neurons in four brain areas and correlated them with movement types, showing adaptive motor control on a cellular level.

In the experiments, the fish is not moving but rather doing what Engert calls 'fictive swimming'. The fish do not swim freely because they are paralyzed and held in place. This approach is a combination of a Hollywood blockbuster movie concept and a historic technique. “We put him in the Matrix,” Engert says, referring to the film. In The Matrix, people behave while paralyzed, but “they don't even know that they are paralyzed.” The historical method involves recording from motor neurons. Electrodes capture signals for tail flicks running through motor neurons in the spine. These signals are a proxy for intended behavior: attempted twists and turns in the water as the fish reacts to a virtual environment.

The fictive swimming output is processed computationally and fed back to screens surrounding the animal, generating a virtual environment through which the fish fictively swims. At the same time, two-photon microscopy delivers a neuronal readout. The experimental setup was eight years in the making, including the construction of ten prototype two-photon microscopes. Although the build-your-own in vivo imaging instructions are in the paper, it is not a 'turnkey system', he says. Besides needing dedication, scientists have to develop decoding algorithms for the fictive swims and reduce the dimensionality of the data.
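One iteration of such a closed loop can be sketched as: rectify and threshold the motor-nerve recording to estimate swim vigor, then advance the fish's position in the virtual world in proportion. Everything below (the decoder, the threshold, the gain) is invented for illustration; the real system relies on calibrated decoding algorithms, as noted above.

```python
import numpy as np

def swim_power(signal, threshold=1.0):
    """Crude proxy for fictive swim vigor: mean rectified motor-nerve
    signal above a fixed noise threshold."""
    rect = np.abs(signal)
    return float(np.clip(rect - threshold, 0, None).mean())

def update_position(position, power, gain=0.5):
    """Advance the fish's position in the virtual corridor in
    proportion to the decoded swim power."""
    return position + gain * power

# One cycle of the loop: record -> decode -> move the virtual world.
burst = np.array([0.0, 3.0, -3.0, 0.0])  # a toy motor-nerve burst
pos = update_position(0.0, swim_power(burst))
```

In the actual experiments this loop runs continuously, so the displayed scene responds to each fictive swim bout in near real time.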

Engert downplays the biological insights of this study, instead highlighting the importance of finding a method to “have the animal control this computer game that it is basically playing, simply by virtue of neuronal activity.”

Justin Ide, Harvard University

Florian Engert put zebrafish in the Matrix.

Weighing his two imaging approaches, both of which he continues to use, Engert says that the Aequorin system, with freely swimming fish, delivers data from one neuron at a time or bulk traces from multiple neurons, and it is easily set up and less invasive. Two-photon imaging coupled with fictive swimming “gives us everything that the Aequorin gives us, but more,” he says.

Engert says that the zebrafish may not be swimming, but these experiments shed light on the neural circuitry underlying behavior. Jayaraman echoes Engert's views on behavior in these experiments. Referring to flies, he says “it's not the real thing, undoubtedly not, but for specific experiments, I think it serves quite well.” Experiments will get “trickier and trickier” as researchers strive to study more complex behaviors. But he is optimistic that the community will “be successful at getting a sufficient approximation to the real thing, and that we will learn something important about the neural circuits and how they work in those behaviors.”

Change history

Corrected online 11 October 2011
In the version of this article initially published, the caption of Vivek Jayaraman's photo misrepresents the focus of his work; and Misha Ahrens, a postdoctoral fellow, is erroneously described as a graduate student. The errors have been corrected in the HTML and PDF versions of the article.

References

  1. Denk, W., Strickler, J.H. & Webb, W.W. Science 248, 73–76 (1990).
  2. Harvey, C.D., Collman, F., Dombeck, D.A. & Tank, D.W. Nature 461, 941–946 (2009).
  3. Ghosh, K.K. et al. Nat. Methods 8, 871–878 (2011).
  4. Barretto, R.P. et al. Nat. Med. 17, 223–228 (2011).
  5. Kandel, E.R. J. Neurosci. 29, 12748–12756 (2009).
  6. Böhm, H., Schildberger, K. & Huber, F. J. Exp. Biol. 159, 235–248 (1991).
  7. Buchner, E. Biol. Cybern. 24, 85–101 (1976).
  8. Newcomb, J.M., Sakurai, A., Lillvis, J.L., Gunaratne, C.A. & Katz, P.S. Proc. Natl. Acad. Sci. USA 109 (suppl. 1), 10669–10676 (2012).
  9. Seelig, J.D. et al. Nat. Methods 7, 535–540 (2010).
  10. Naumann, E.A., Kampff, A.R., Prober, D.A., Schier, A.F. & Engert, F. Nat. Neurosci. 13, 513–520 (2010).
  11. Ahrens, M.B. et al. Nature 485, 471–477 (2012).
  12. Nitz, D. Nature 461, 889–890 (2009).


Author information

Vivien Marx is technology editor for Nature and Nature Methods.