Researchers wanting to understand the workings of the brain need maps on many different scales — to link structure to function, to parse the complexities of memory loss or learning disability, or to find out what is different about the brains of people with neurodegenerative diseases. For instance, programmes such as the Human Connectome Project, funded by the US National Institutes of Health (NIH) in Bethesda, Maryland, and the Consortium of Neuroimagers for the Non-invasive Exploration of Brain Connectivity and Tracts (CONNECT), funded by the European Commission in Brussels, are producing magnetic resonance imaging (MRI) maps that follow neuronal connections across the entire brain, on a scale of tens of centimetres.

Credit: KHENG GUAN TOH/SHUTTERSTOCK

Some research teams are embarking on even more ambitious projects to create maps that reveal structure and connections at the scale of individual neurons. The slender processes of neurons — the axons and dendrites — are 20 micrometres or less in diameter, but can extend for several millimetres. Mapping the thicket of 86 billion neurons and their connections in the human brain at this scale is still a distant dream, but the goal of mapping the 75 million neurons in the mouse brain could be a little closer. And the mouse map might pave the way for the human one.


To achieve this, however, researchers need new tools, starting with refinements to electron microscopy. In conventional electron-microscopy approaches, scientists have to make ultrathin slices of brain tissue and laboriously image them slice by slice to build up a three-dimensional (3D) picture. In 1986, the complete nervous system of one model organism was mapped using this approach — the 302 neurons of the nematode Caenorhabditis elegans1 — but the process is too slow and cumbersome to scale up. Now, electron microscopes at many universities are “being mothballed”, says neurobiologist Jeff Lichtman at Harvard University in Cambridge, Massachusetts, because they are deemed a throwback to an era when “all you could do was stain and look”.

Nevertheless, high-resolution, 3D neuronal mapping by electron microscopy is gaining momentum. “It's not just the old neuroanatomy gussied up with new machines,” says Lichtman, who studies the neuronal architecture of the mouse brain. “It's giving us three-dimensional information about structure at the super-resolution of nanometres — and that is invaluable.” These capabilities are spurring the development of new ways to prepare, image and analyse brain tissue from model organisms such as mice, and from people who have donated their brains to science on their death.

Winfried Denk estimates that image capture of a whole mouse brain could yield 60 petabytes of data. Credit: W. DENK/MAX PLANCK INST. FOR MED. RES.

Until now, when neuroscientists published cell-level images and analysis, they were looking at a tiny region of brain; for example, a few hundred neurons in the mouse or fruitfly2,3. Efforts to map all the neurons and circuits in the mouse brain are just beginning in several labs. Creating structural maps is a high priority of the NIH-funded Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, which US President Barack Obama launched in April4. Another large-scale effort is the Human Brain Project, funded by the European Union, which is focused on a different kind of map — a computational model of the brain.

Neurobiologist and microscopist Winfried Denk at the Max Planck Institute for Medical Research in Heidelberg, Germany, thinks that the technology to image an entire mouse brain at cellular resolution is within reach. Late next year, he and Lichtman are each scheduled to receive a new multi-beam scanning electron microscope (SEM) for their labs, developed by Carl Zeiss Microscopy in Oberkochen, Germany. Still at the prototype stage, the instrument promises to accelerate the imaging of brain slices hugely, possibly by as much as 60 times. Such gains could make it feasible to map all the neurons in a mammalian brain. Other microscope manufacturers, such as FEI in Eindhoven, the Netherlands, are also developing tools to enable detailed brain mapping.

Preparing to see

Before brain tissue can be imaged using electron microscopy, samples must be prepared by slicing and staining. These steps are important for the subsequent process of using the two-dimensional images generated by the electron microscope to reconstruct the three dimensions of the brain and its neuronal connections. Scientists are exploring new ways to prepare samples so that they can obtain higher contrast for electron microscopy and view larger pieces of brain tissue.

When brain tissue is well stained with chemicals, thinly sliced and viewed using electron microscopy, researchers see slices that look like thin plates covered with tiny soap bubbles. Some bubbles represent the cross-section of a neuron, and the soap-bubble boundary shows where one neuron ends and another begins. Contrast is crucial, because scientists need to discern neurons from a wealth of other cell types and organelles in the brain tissue. They can then trace each neuron in each of the imaged slices, by hand and by eye.

But such imaging has its problems: staining often does not show enough structure, the slices might be warped and images can be blurry.

Denk and his team previously developed an approach called serial block-face electron microscopy (SBEM), which obviates the need to prepare slices before imaging them. Instead, successive images are obtained by scanning the face of an unsliced block of tissue placed in the electron microscope, then cutting off an ultrathin slice using an automated microtome within the instrument. The newly exposed surface of the sliced block is rescanned, and so on until a stack of images has been obtained. The advantage of imaging the unsliced tissue, Denk says, is that it does not matter if the slice itself crumples.

Until now, Denk and his team have cut only from pieces of tissue that measured much less than 1 mm across. But Denk's ultimate aim is to image and cut slices from a whole mouse brain, which will mean dealing with tissue blocks that are some 10 mm across. Denk is now building a whole-brain microtome to incorporate into the microscope. This will require a diamond knife that is 8–10 mm wide instead of the 1-mm knife used at present, he says.

Figure: 'Sharp tools'. Credit: SOURCE: DIATOME

To make these knives, he is collaborating with diamond-knife manufacturer Diatome in Biel, Switzerland. The longer blade width is a challenge, says Diatome engineer Helmut Gnaegi. At 10 mm, it is around ten times wider than a typical diamond knife, and the blade must be made from a large diamond that is free of crystalline imperfections (see 'Sharp tools').

The entire 10-mm cutting edge must be polished to perfection, Gnaegi says, because even a single knife mark would interfere with the smooth sample surface that Denk requires. After polishing, the cutting edge must have a maximum thickness of only 4 nm. This is a “formidable challenge” that the company hopes to overcome by the end of this year, says Gnaegi. He and his team have designed special polishing equipment for the task.

Gnaegi's team is also addressing the build-up of electrostatic charge that occurs when the knife tries to cut through tissue that has been embedded in non-conducting resin. The charge tends to make the sections stick to the sample surface, which makes imaging difficult. “An electrically conductive knife surface with optimized gliding properties leads to a sample surface free of cutting debris,” says Gnaegi.

The SBEM approach helps with precise rendering of tissue in three dimensions, but it is hard to make it high-throughput. Researchers would have to set up many microscopes in a dedicated facility. The alternative is an instrument that is “intrinsically a parallel imaging machine”, Denk says. That is why his lab is so interested in the Zeiss multi-beam machine that is due next year.

Lichtman and his team take a different approach to sample preparation. He has developed an automatic tape-collecting ultra-microtome (ATUM), which slices tissue, places each slice on a tape and delivers the slices to the microscope in a conveyor-belt fashion. Still other labs are working on different tissue-slicing approaches. “One of them will eventually emerge as the way to do things, probably,” says Denk.

Many labs are working on ways to improve the conventional tissue-staining methods used for electron microscopy, which rely on heavy metals such as lead and osmium. For example, Shawn Mikula, a postdoc in Denk's team, has been experimenting with immersing whole, unsliced mouse brains in various staining chemicals for extended periods to enhance the spread of osmium throughout the tissue and to achieve adequate contrast in the electron microscope. This procedure worked well for imaging axons, Denk says, but was less successful for imaging the finer processes5. Given time, he says, “we're pretty confident that we'll figure it out.”

Beam bonanza

As part of their collaboration with Zeiss, both Lichtman and Denk deliver test samples to the company, which are imaged using the prototype microscope. At present, the number of scientist-testers has to be limited, and the instrument is not available for pre-order.

Zeiss's prototype electron microscope uses 61 beams in a hexagonal configuration (left). The beams simultaneously scan sub-images (shown in yellow and blue squares) that merge into a large image (right). Credit: CARL ZEISS MICROSCOPY

Looking at images generated by the device has been “very exciting, because you realize how much faster those images were taken”, says Lichtman. When he and his team run experiments, the lab generates around 1 terabyte of image data a day. With the new microscope, he says, the rate will be more like 3 terabytes per hour.
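Those two figures imply a large jump. As a quick sanity check (a minimal sketch in Python, assuming both rates are sustained around the clock), 3 terabytes per hour works out to roughly 70 times today's 1 terabyte per day, broadly in line with the 60-fold acceleration quoted for the multi-beam design:

```python
# Back-of-the-envelope check of the quoted acquisition rates.
tb_per_day_single_beam = 1.0   # current output: ~1 terabyte of images per day
tb_per_hour_multibeam = 3.0    # projected output: ~3 terabytes per hour

tb_per_day_multibeam = tb_per_hour_multibeam * 24
speedup = tb_per_day_multibeam / tb_per_day_single_beam
print(f"~{tb_per_day_multibeam:.0f} TB/day, roughly {speedup:.0f}x today's rate")
# -> ~72 TB/day, roughly 72x today's rate
```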

Using his tape-collecting microtome with the new instrument, Lichtman plans to slice a mouse brain into many tens of thousands of sections, which can be put into the machine in large groups. “It images all of them and then you put the next several hundred or thousand in,” he says.

Dirk Zeidler (left) and Gregor Dellemann are developing the new Zeiss microscope. Credit: CARL ZEISS MICROSCOPY

The Zeiss multiple-beam SEM resembles a standard SEM in many ways, says Dirk Zeidler, the physicist who led its design. It has an electron source, lenses and a scanning unit that moves the beam across the sample surface horizontally, in the same way as a reader's eye moves over lines of text. It is built to be compatible with the various ways in which neuroscientists prepare tissue for imaging.

But the differences from a conventional SEM are crucial. The machine has been engineered for a wide view of the tissue and for speed, says Gregor Dellemann, who is the business-development manager for the new microscope. The most obvious difference is that the instrument has 61 electron beams scanning across the sample instead of the one beam in a conventional SEM, and an array of 61 secondary electron detectors (see 'Speed reading'). It also dispenses with features such as energy-dispersive X-ray spectroscopy detectors or back-scattered electron detectors, both of which are used in standard electron microscopes to map variations in a sample's chemical composition.

Dellemann explains that including several detector types would mean incorporating three arrays of 61 detectors each, and would require a mechanism to guide the right signal to each one. Although possible, such a configuration would add cost and complexity to the instrument and slow it down.

Some scientists had voiced doubts about the multiple-beam SEM when they first heard about it, says Zeidler, who notes that they even questioned whether it might violate the principles of optics. But they recanted when shown a working instrument as it generated images, he says, adding “that was quite fun”.

The new instrument addresses the speed limits and blind spots of a standard SEM. When the beam of electrons hits the sample, the sample emits secondary electrons that carry information about, for example, its surface shape and staining. Detectors record the intensities of the secondary electrons and the instrument analyses these data to give an image of the sample.

Figure: 'Speed reading'. Credit: CARL ZEISS MICROSCOPY

To speed up this process, the beam would need to move more quickly and the detectors would need to register secondary electrons in less time. But when detection times fall below about 50 nanoseconds, the images get noisy. Furthermore, in the brief moment when electrons hit the detector, “it goes blind”, Dellemann says. No data can be collected until this 'dead time' is over and the detector 'sees' again.

Enhancing the abilities of an SEM using multiple electron beams and detectors for wider and quicker imaging requires attention to be paid to basic physics. Image quality suffers if the multiple beams are too close to each other because the electrons repel one another. “It will get blurry, which means you do not see the features you want to see,” says Zeidler.

The solution was to spread the charge over a larger area, minimizing the forces between electrons. More beams cover a larger sample area, and more detectors mean that the temporary blind spots are less of a hindrance. But the team still had to optimize the number of beams and their spacing, Dellemann says. A hexagonal arrangement proved to be the best geometry, and 61 beams paired with 61 detectors the optimal count.
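Sixty-one is consistent with a centred hexagonal packing: one central beam surrounded by concentric rings of 6, 12, 18 and 24 beams. The ring-by-ring breakdown below is an inference from the hexagonal layout described above, not a detail from Zeiss; it is a minimal sketch of the arithmetic:

```python
# Beam count for a centred hexagonal array: one central beam plus
# concentric rings of 6, 12, 18, ... beams.
def centred_hex_count(rings: int) -> int:
    return 1 + sum(6 * r for r in range(1, rings + 1))

for rings in range(1, 5):
    print(rings, centred_hex_count(rings))
# 1 ring -> 7, 2 -> 19, 3 -> 37, 4 -> 61 (the Zeiss configuration)
```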

Most SEMs detect electrons for intervals of around 50 nanoseconds, such that the observer can image 20 million pixels per second before any physical limits are reached, Dellemann says. In a multi-beam instrument, each beam has a dedicated detector in the array and the 50-nanosecond detection speed stays the same. So with 61 beams, the images can get bigger. In 1 second, Dellemann says, the instrument can take images of up to 1.2 billion pixels.
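Those figures follow directly from the detection interval; a one-line check (illustrative arithmetic only):

```python
# Pixel-rate arithmetic from the quoted 50-ns detection interval.
dwell_s = 50e-9                          # detection time per pixel
per_beam = 1 / dwell_s                   # 20 million pixels per second per beam
beams = 61
print(f"per beam: {per_beam:.1e} px/s")           # 2.0e+07
print(f"total:    {beams * per_beam:.2e} px/s")   # ~1.22e+09, i.e. ~1.2 Gpx/s
```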

Zeiss started developing the instrument in 2000, when the company began exploring multi-beam SEM for the semiconductor industry in partnership with Applied Materials of Santa Clara, California. The aim was to develop technology that could inspect semiconductors using partially automated, parallel imaging. A few years later, Zeiss realized the potential of this technology for the life sciences, with the most interest coming from neuroscientists who wanted high-resolution brain mapping.

Energy boost

At microscope company FEI, engineers are souping up SEMs in a different way, using a single electron beam. The prototype of a new multi-energy deconvolution SEM is currently undergoing testing at the company, says Ben Lich, who handles business development for the life sciences at FEI. This instrument, too, is intended to speed up neuronal mapping.

Lich and his team have long worked with neuroscientists to optimize the standard components of SEMs. The US BRAIN Initiative and other ventures are encouraging the company to “do a bit more” for neuroscience, Lich says, so that researchers in the field can extract more information from their samples.

Current technology has a hard time imaging the more slender processes. To see cell bodies and axons, tissue sections need to be 30–40 nm thick, but to see the fine processes, sections must be just 10–20 nm thick. Missing thinner branches in the imaging increases the risk of inaccuracies when reconstructing 3D circuits of connected neurons.

To address this challenge, FEI's new microscope performs 'sub-surface imaging'. It takes multiple images of the same area of a sample and is able to image below the surface. It can help scientists to track down finer processes that they may have missed at a different resolution, Lich says.

The instrument sends sequential electron beams into the sample at differing energies, with accelerating voltages starting at 1 kilovolt (kV) and ramping up to around 2.5 kV. This way, it obtains more information from the sample than a conventional single-scan microscope can, Lich says. After it acquires images, the instrument applies a mathematical technique called deconvolution, which separates the information from the different depths in the sample and produces virtual sections.
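The article does not spell out the mathematics, but the core idea of depth deconvolution can be sketched as a small linear inverse problem: each landing energy penetrates deeper and records a different depth-weighted mixture of the same stack of layers, and inverting the mixing matrix recovers the per-layer contrast. The weights below are invented purely for illustration:

```python
import numpy as np

# Toy model of multi-energy depth deconvolution. The depth weights below are
# invented for illustration; FEI's actual depth-response curves are not public.
layers = np.array([0.9, 0.2, 0.6])   # "true" contrast of three depth layers

# Rows: mixing weights for scans at increasing beam energy (roughly 1-2.5 kV).
A = np.array([
    [0.8, 0.2, 0.0],                 # low energy: mostly the surface layer
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],                 # high energy: reaches the deepest layer
])

signals = A @ layers                 # what the detectors would record
recovered, *_ = np.linalg.lstsq(A, signals, rcond=None)
print(np.round(recovered, 3))        # -> [0.9 0.2 0.6]: the virtual sections
```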

Brain mapping is “not just the old neuroanatomy gussied up with new machines”, says Jeff Lichtman. Credit: JEFF LICHTMAN/HARVARD UNIV.

The instrument lets researchers scan their samples with varying resolution. A first image at low magnification might reveal cell bodies, which are important for neurobiology but not for understanding neuronal connectivity. An area that is densely forested with finer processes might merit a high-resolution scan.

Microscope manufacturers say that their experience in other fields helps with the challenge of whole-brain mapping, which will involve imaging many samples for months at a time. FEI, for example, has microscope customers who are checking the quality of small structures on semiconductors or determining the association of minerals and precious metals in mining ores. Their instruments run 24 hours a day, seven days a week for many months, says Lich, and they hold up.

The engineering of microscopes for neuroscience in particular also means helping to achieve better contrast from all samples, including those that might not be well stained. Lich says that FEI has designed its electron-beam column to optimize detection sensitivity.

In juggling such factors as data quality, scanning speed and robustness, FEI's team designed the multi-energy SEM so that its detectors for back-scattered electrons are engineered for speed, particularly at low energies. Using back-scatter signals is a more stable way to create images, Lich says. Secondary-electron detectors are more prone to sample-charging problems, in which charge deposited by the electron beam builds up in the sample and distorts the image. In areas where the sample is charged, the acquired data can lead to ambiguous interpretations of neuronal connectivity. Back-scatter images are not as sensitive to this problem, he says.

For neuron mappers, FEI's method for SEM imaging offers a way to slice a sample virtually before physically cutting it, Lich says. The microscope's electron beam hits an area multiple times, generating a virtual image stack. Rather than physically slicing tissue to much less than 40 nm, researchers can image the sample ten times at 4-nm depth resolution to create virtual slices of the desired thickness. The technique could reduce the need for ultrathin slicing and the consequent risk of sample damage and distortion. Physical slices could remain relatively thick, and intermediate layers within each slice would still be resolved, says Lich. And for a scientist using Denk's approach of imaging followed by slicing, the tissue can be imaged virtually several times to different depths before it is physically shaved off.
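The arithmetic behind those numbers is simple: ten virtual planes spaced 4 nm apart span one 40-nm physical slice. A toy check:

```python
# Virtual-sectioning arithmetic: one 40-nm physical slice read out as ten
# virtual slices spaced 4 nm apart in depth (depths below the block face, nm).
physical_nm, step_nm = 40, 4
depths = list(range(0, physical_nm, step_nm))
print(depths)        # [0, 4, 8, 12, 16, 20, 24, 28, 32, 36]
print(len(depths))   # 10 virtual planes per physical slice
```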

Under an electron microscope, mouse brain tissue shows neuronal cells (including myelinated axons; black rings) and organelles such as mitochondria. Careful analysis reveals where neurons end and begin. Credit: FEI

The new SEM technology quickly creates issues of scalability for those who want to image large brain sections, Lich says. If scientists want data at 5-nm resolution, every voxel — a 3D pixel — must be 5 nm on each side. At this resolution, the FEI microscope can obtain images, but imaging a cube of brain tissue measuring 1 mm on each side would yield 8 petavoxels of data — and those could take years to analyse, says Lich.
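The 8-petavoxel figure checks out (a one-liner, assuming isotropic 5-nm voxels):

```python
# Voxel count for a 1-mm cube of tissue at 5-nm isotropic resolution.
side_nm = 1_000_000                 # 1 mm expressed in nanometres
voxel_nm = 5
per_side = side_nm // voxel_nm      # 200,000 voxels along each edge
total = per_side ** 3
print(f"{total:.1e} voxels")        # 8.0e+15, the 8 petavoxels Lich quotes
```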

In his view, the neuroscience community is pushing microscope development with its structural mapping projects. Some, he says, might call the plan to image the whole mouse brain outrageous. “In principle, nothing is impossible — but it is really an extraordinary challenge,” he says.

Data analysis

As neuroscience ramps up its mapping efforts, the challenge of big data becomes more apparent. Mapping the human brain is as difficult as creating a 3D map of 10,000 cities the size of Tokyo, London or New York, and locating every inhabitant in every building, street, stairwell, lift and subway. To establish the kind of connectivity that neuroscientists are hoping to chart, the mapping would then have to unearth all lines of communication — personal interactions as well as those by phone, post and e-mail — between all inhabitants of these 10,000 cities, explains Moritz Helmstaedter at the Max Planck Institute of Neurobiology in Munich, Germany.

Denk estimates that capturing images of a whole mouse brain with the new Zeiss SEM could deliver 60 petabytes of data. And Lichtman notes that such large amounts of data are going to be even more challenging to share than they are now. At present, he sends his microscopy data sets to the Open Connectome project at Johns Hopkins University in Baltimore, Maryland. The project's website is an open repository for data accrued from different labs working on the brains of model organisms such as mice and nematodes. It enables anyone to download information from MRI, electron-microscopy and brain-wiring studies.

Lichtman has one data set that is around 100 terabytes in size. “How do you give that to somebody?” he asks. As data sets get even larger, the community will have to explore ways to make data available to scientists for perusal without download, he says.
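To put such a transfer in perspective, a rough timing estimate follows; the 1-gigabit-per-second link speed is an assumption for illustration, not a figure from Lichtman:

```python
# Transfer time for a 100-terabyte data set. The 1 Gb/s link speed is an
# assumed figure for illustration; it is not from the article.
dataset_bytes = 100e12
link_bits_per_second = 1e9
seconds = dataset_bytes * 8 / link_bits_per_second
print(f"{seconds / 86_400:.1f} days of continuous transfer")   # ~9.3 days
```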

Fluorescence microscopy can show many types of mouse neuron using the 'Brainbow' technique. Credit: HARVARD UNIV./LICHTMAN LAB

The next-generation microscopes can deliver the high-resolution images, but new approaches, including computational tools, are needed to build 3D wiring diagrams of neuronal circuits based on those images.

Denk and colleagues in the United States and Germany have recently used SBEM to map a small section of mouse retina2. Even with computer software, tracing the neuronal pathways in the images would have taken vastly more time and more people than are available in a typical lab. The team turned to crowd-sourcing, enlisting the help of a host of non-scientists sitting at their home computers.

Software to help researchers with these tasks is being developed, and computer scientists plan to keep pace with the advances in microscope technology, Lichtman says. Image-analysis software under development includes tools with built-in machine learning and other types of algorithm to automate or semi-automate the tasks.

Denk says that he used to think that researchers had to solve data-analysis problems before imaging larger brain volumes and generating huge data sets. But his view has changed. “My current thinking is that we'll just produce a large volume,” Denk says. “Then, that will embarrass the analysis people into coming up with something.”

Neuroscientist Sebastian Seung at the Massachusetts Institute of Technology in Cambridge, who collaborated on the mouse retina mapping study2, leads a group that launched the online game EyeWire. More than 80,000 volunteers worldwide have helped to trace the complete 3D contours of neurons running through the game's electron-microscopy images. He is undaunted by the large data sets that brain mappers will be sending his way for analysis. “Why would we be frightened?” he says. “It is just an opportunity.” Large image data sets are in fact a plus when mapping the mammalian nervous system: imaging too small a volume risks missing neuronal connections, he says.

Still, the challenge in mammals is formidable. When scientists image fruitfly brain tissue, a small imaging volume can capture much of a neuronal circuit, because many neurons do not run and branch over long distances, says Seung. In mammals, many researchers trace connections found in the retina, because much of the circuit can be captured with a small tissue volume, he says. But that is already a huge challenge — and it gets worse when tracing connections in the brain, because neuronal circuits are spread out over much larger volumes in mammals than in flies. “You can't really map out connections and circuits unless the volume you are imaging is reasonably large,” he says.

Lichtman believes that the neurobiology community will ultimately find a way to garner the required knowledge because researchers stand to learn so much about the brain. “Without it, there are lots of mysteries,” he says. “With it, mysteries become just facts.”