Computational imaging

Machine learning for 3D microscopy

Artificial neural networks have been combined with microscopy to visualize the 3D structure of biological cells. This could lead to solutions for difficult imaging problems, such as the multiple scattering of light.

How can researchers see inside an object without using invasive techniques, or recover 3D information by capturing only 2D images? This question was answered decades ago with the invention of tomography — a technique that computationally reconstructs 3D objects from a set of 2D images, usually captured from a range of projection angles. Tomography, which is used in magnetic resonance imaging and computerized tomography scanners for medical and other applications, conventionally provides an analytical solution to the 3D reconstruction problem. However, as the use of tomography expands to applications that involve complex scenarios, it is not always possible, or desirable, to devise analytical solutions. Now, machine-learning methods are turning optical tomography on its head with the use of algorithms borrowed from data science, which reconstruct the 3D refractive index of an object by solving a large-scale optimization problem. Writing in Optica, Kamilov et al.1 demonstrate this experimentally using a holographic optical-phase microscope.
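
To make the conventional route concrete, the Python sketch below simulates 2D parallel-beam tomography and inverts it analytically with filtered back-projection. It is an illustration of classical tomography, not of Kamilov and colleagues' method, and it assumes the scikit-image library and its standard test phantom.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Classical (analytical) tomography in two dimensions: project a known
# object over a range of angles, then invert with filtered back-projection.
image = shepp_logan_phantom()                         # ground-truth test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles (degrees)
sinogram = radon(image, theta=theta)                  # forward model: one 1D projection per angle
reconstruction = iradon(sinogram, theta=theta)        # analytical inversion
rmse = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"root-mean-square reconstruction error: {rmse:.3f}")
```

Closed-form inversion of this kind assumes that light travels through the sample along straight rays; it is precisely this assumption that fails when scattering becomes important, which motivates the approaches discussed below.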

Tomography is the quintessential example of computational imaging, a discipline that transcends conventional imaging techniques by simultaneously designing both the optical system and the image-processing algorithms. Together, the optics and the algorithms can achieve things that neither could do alone. For example, Kamilov et al. recover the 3D 'phase' of a biological cell — the nanometre-scale distortions of a wavefront as it passes through an object, thus rendering transparent objects visible.

Kamilov et al. use machine-learning algorithms — computer programs that can learn from and make predictions based on input data — to give a boost to 3D phase imaging. By doing so, the authors bridge the fields of computational imaging and artificial neural networks (ANNs). The latter underlie a popular machine-learning framework that has found many applications2, from e-commerce and e-mail spam filtering to finding cat videos on YouTube. ANNs have been used to solve problems that involve big data (for example, image classification) and so they are a natural fit for computational microscopy.

Microscopists are swimming in data — they can easily collect terabytes of images in a few minutes. Easy access to large data sets creates the perfect opportunity for data-science approaches to image reconstruction. First, use all available knowledge about the sample (for example, an estimate of the number of bright spots within it) and about the imaging system (from optical physics) to constrain the problem, and then upload all the data to the computer and let the algorithm find the answer. Although there may not be an explicit analytical solution to the reconstruction problem using this approach, important information can still be teased out.
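
As a toy example of this recipe (and emphatically not the authors' algorithm), the sketch below recovers a sparse signal from fewer measurements than unknowns by iterative optimization. The matrix A stands in for whatever physics-based forward model maps the object to the data, and all names and values are illustrative.

```python
import numpy as np

def reconstruct_sparse(A, y, lam=0.1, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient descent
    (ISTA): a gradient step that pulls the estimate towards the data, followed
    by a soft-threshold that encodes the prior knowledge that x is sparse."""
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2          # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)                # data-consistency step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # enforce the prior
    return x

# Illustrative use: a few 'bright spots' recovered from 80 noisy measurements
# of a 200-element object.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = reconstruct_sparse(A, y)
```

There is no closed-form inverse here — A has more columns than rows — yet the sparsity prior makes the answer recoverable in practice, which is the essence of the data-science approach to image reconstruction.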

The authors use ANNs to attack the 3D phase-imaging problem, which is compounded by the complication of multiple scattering of light as it passes through a 3D biological sample. Multiple scattering is one of the most challenging problems in optics — if we solved it completely, we could see through fog, murky water or even human tissue. Physicists have tried for decades to undo scattering analytically, but it is difficult, if not impossible, to tackle large-scale problems that involve many scattering events. The authors' machine-learning approach is indirect (non-analytical), but gives a good solution that they verify experimentally.

Kamilov and co-workers adapt ANNs to work with the multi-slice method3, which has previously been used to describe multiple (dynamical) scattering of electrons in 3D crystal lattices. The authors model the target object as a set of slices: each slice is represented by a layer of the network and each pixel of the 3D object is represented by a network node (Fig. 1). The ANN's training data consist of a set of 2D holograms of the 3D object that are captured from different angles. The authors use a modified 'back-propagation' algorithm that predicts the 3D refractive index of the object by minimizing the differences between the training data and model solutions, with an added 'sparsity' constraint that enforces the smoothness of the solution. Multiple scattering is treated only in the general direction of propagation — that is, back-reflected light is not included in the computations. Similar methods, applied to different hardware set-ups, have provided spatial resolution beyond the diffraction limit of an optical microscope4 or at the atomic scale in studies using electron microscopy5.
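
The forward model at the heart of this scheme can be sketched in a few lines of Python. The code below is a minimal beam-propagation (multi-slice) simulator written under simplifying assumptions (a scalar field, a square sampling grid, no reflections); it is not the authors' implementation, and the function names and parameter values are ours. Differentiating the mismatch between its output and the measured holograms with respect to the refractive index of each slice, layer by layer, is the optical analogue of back-propagation through a neural network.

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Free-space (angular-spectrum) propagation of a 2D complex field over dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                              # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    kz = np.sqrt(np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0))
    kernel = np.exp(2j * np.pi * dz * kz) * (kz > 0)          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def multislice_forward(delta_n, incident, wavelength, dx, dz):
    """Multi-slice forward model: the object is a stack of thin slices
    (delta_n has shape [n_slices, N, N], the refractive-index contrast).
    Each slice delays the phase of the field, which then diffracts to the
    next slice -- the role played by one layer of the network in Fig. 1."""
    k0 = 2.0 * np.pi / wavelength
    field = incident
    for slice_dn in delta_n:
        field = field * np.exp(1j * k0 * slice_dn * dz)       # phase delay in this slice
        field = propagate(field, dz, wavelength, dx)          # diffraction to the next slice
    return field                                              # exit field -> simulated hologram

# Illustrative use: a plane wave passing through a weakly refracting 'cell'.
N, n_slices = 128, 32
wavelength, dx, dz = 0.5e-6, 0.2e-6, 0.5e-6                   # metres (illustrative values)
x = (np.arange(N) - N / 2) * dx
z = (np.arange(n_slices) - n_slices / 2) * dz
xx, yy, zz = np.meshgrid(x, x, z, indexing="ij")
cell = 0.03 * (xx**2 + yy**2 + zz**2 < (8e-6) ** 2)           # spherical index contrast
exit_field = multislice_forward(np.moveaxis(cell, 2, 0), np.ones((N, N), complex),
                                wavelength, dx, dz)
```

In this picture the learned quantities are not abstract network weights but the physical refractive indices of the sample itself.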

Figure 1: 3D image reconstruction with artificial neural networks.

Kamilov et al.1 use an artificial neural network (ANN) algorithm to describe how the phase of light is modified as it propagates through a 3D biological sample (here, a cell). The sample is modelled as a series of layers. Each pixel (circles) of the 3D model corresponds to a node of the ANN. These are connected to nodes in the subsequent layer (arrows) to represent the scattering of the input wavefront in the direction of propagation. The algorithm incorporates error correction by comparing a detector's measurements with the model's output — a 3D reconstruction of a cell's refractive index — and minimizing the difference between the two. (Figure adapted from Fig. 2 of the paper1.)

This work is part of a larger movement to revolutionize imaging techniques by rethinking both the optical design and the post-processing of the images. Fully leveraging the power of machine learning for microscopy could lead to methods that can see inside the human body and resolve individual cells by overcoming multiple scattering. However, we are a long way off, and for this to be achieved, physicists and engineers need to account properly for complications arising from back-scattered light and for the directional dependence (anisotropy) of the objects' optical properties. In this quest, extremely large imaging data sets will surely be required and researchers may need to follow promising frontiers in data science (such as deep learning2), or invent new ones.

Kamilov and co-workers' shift away from analytical solutions allows them to find an answer to the 3D imaging reconstruction problem, but such an approach does not always have a provably correct solution. This is not a problem in many of the applications of data science — no one dies if your cat-video search accidentally returns a dog video. But for scientific imaging applications, for example in medical settings, provability may be critical. As such, computational imaging brings a rich set of challenges for theorists and statisticians, as well as practitioners.

References

1. Kamilov, U. S. et al. Optica 2, 517–522 (2015).
2. LeCun, Y., Bengio, Y. & Hinton, G. Nature 521, 436–444 (2015).
3. Cowley, J. M. & Moodie, A. F. Acta Crystallogr. 10, 609–619 (1957).
4. Tian, L. & Waller, L. Optica 2, 104–111 (2015).
5. Van den Broek, W. & Koch, C. T. Phys. Rev. Lett. 109, 245502 (2012).

Author information

Correspondence to Laura Waller.
