Confocal non-line-of-sight imaging based on the light-cone transform

  • Nature volume 555, pages 338–341 (15 March 2018)
  • doi:10.1038/nature25489


How to image objects that are hidden from a camera’s view is a problem of fundamental importance to many fields of research (refs 1–20), with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector (refs 14–19). Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections (refs 21–24), NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
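
To make the reconstruction claim concrete, the image-formation model behind the light-cone transform can be sketched as follows; the notation here is ours and normalization details are simplified, so treat this as a sketch rather than the paper's exact statement. A confocal measurement τ(x′, y′, t), recorded at the wall point that is simultaneously illuminated and imaged, relates to the hidden albedo ρ(x, y, z) by

    \tau(x', y', t) = \iiint \frac{\rho(x, y, z)}{r^4}\,\delta(2r - tc)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z,
    \qquad r = \sqrt{(x - x')^2 + (y - y')^2 + z^2},

where c is the speed of light and the r^{-4} term models radiometric falloff. The change of variables v = (tc/2)^2 and u = z^2 absorbs the falloff and turns the measurement into a shift-invariant 3D convolution,

    v^{3/2}\,\tau\!\left(x', y', \frac{2\sqrt{v}}{c}\right)
      = \iiint \frac{\rho(x, y, \sqrt{u})}{2\sqrt{u}}\,
        \delta\!\left((x - x')^2 + (y - y')^2 + u - v\right)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}u,

against the light-cone kernel h(x, y, w) = δ(x² + y² − w). The hidden volume can therefore be recovered with two 1D resampling steps and a single Wiener-filtered 3D deconvolution, which is why the memory and processing footprint is so much smaller than that of back-projection methods.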



References

  1. Freund, I. Looking through walls and around corners. Physica A 168, 49–65 (1990).

  2. Wang, L., Ho, P. P., Liu, C., Zhang, G. & Alfano, R. R. Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate. Science 253, 769–771 (1991).

  3. Huang, D. et al. Optical coherence tomography. Science 254, 1178–1181 (1991).

  4. Bertolotti, J. et al. Non-invasive imaging through opaque scattering layers. Nature 491, 232–234 (2012).

  5. Katz, O., Heidmann, P., Fink, M. & Gigan, S. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photon. 8, 784–790 (2014).

  6. Katz, O., Small, E. & Silberberg, Y. Looking around corners and through thin turbid layers in real time with scattered incoherent light. Nat. Photon. 6, 549–553 (2012).

  7. Strekalov, D. V., Sergienko, A. V., Klyshko, D. N. & Shih, Y. H. Observation of two-photon “ghost” interference and diffraction. Phys. Rev. Lett. 74, 3600–3603 (1995).

  8. Bennink, R. S., Bentley, S. J. & Boyd, R. W. “Two-photon” coincidence imaging with a classical source. Phys. Rev. Lett. 89, 113601 (2002).

  9. Sen, P. et al. Dual photography. ACM Trans. Graph. 24, 745–755 (2005).

  10. In IEEE 16th Int. Conference on Computer Vision 2270–2278 (IEEE, 2017).

  11. Klein, J., Peters, C., Martín, J., Laurenzis, M. & Hullin, M. B. Tracking objects outside the line of sight using 2D intensity images. Sci. Rep. 6, 32491 (2016).

  12. Chan, S., Warburton, R. E., Gariepy, G., Leach, J. & Faccio, D. Non-line-of-sight tracking of people at long range. Opt. Express 25, 10109–10117 (2017).

  13. Gariepy, G., Tonolini, F., Henderson, R., Leach, J. & Faccio, D. Detection and tracking of moving objects hidden from view. Nat. Photon. 10, 23–26 (2016).

  14. Kirmani, A., Hutchison, T., Davis, J. & Raskar, R. Looking around the corner using transient imaging. In IEEE 12th Int. Conference on Computer Vision 159–166 (IEEE, 2009).

  15. Velten, A. et al. Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nat. Commun. 3, 745 (2012).

  16. Buttafava, M., Zeman, J., Tosi, A., Eliceiri, K. & Velten, A. Non-line-of-sight imaging using a time-gated single photon avalanche diode. Opt. Express 23, 20997–21011 (2015).

  17. Gupta, O., Willwacher, T., Velten, A., Veeraraghavan, A. & Raskar, R. Reconstruction of hidden 3D shapes using diffuse reflections. Opt. Express 20, 19096–19108 (2012).

  18. In Computer Vision – ECCV 2012, 542–555 (Springer, 2012).

  19. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 7216–7224 (IEEE, 2017).

  20. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 3222–3229 (IEEE, 2014).

  21. Schwarz, B. LIDAR: mapping the world in 3D. Nat. Photon. 4, 429–430 (2010).

  22. McCarthy, A. et al. Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection. Opt. Express 21, 8904–8915 (2013).

  23. Kirmani, A. et al. First-photon imaging. Science 343, 58–61 (2014).

  24. Shin, D. et al. Photon-efficient imaging with a single-photon camera. Nat. Commun. 7, 12046 (2016).

  25. Abramson, N. Light-in-flight recording by holography. Opt. Lett. 3, 121–123 (1978).

  26. Velten, A. et al. Femto-photography: capturing and visualizing the propagation of light. ACM Trans. Graph. 32, 44 (2013).

  27. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 2289–2297 (IEEE, 2017).

  28. Minkowski, H. Raum und Zeit. Phys. Z. 10, 104–111 (1909).

  29. Wiener, N. Extrapolation, Interpolation, and Smoothing of Stationary Time Series Vol. 7 (MIT Press, 1949).

  30. Pharr, M., Jakob, W. & Humphreys, G. Physically Based Rendering: From Theory to Implementation 3rd edn (Morgan Kaufmann, 2017).



Acknowledgements

We thank K. Zang for his expertise and advice on the SPAD sensor. We also thank B. A. Wandell, J. Chang, I. Kauvar and N. Padmanaban for reviewing the manuscript. M.O’T. is supported by the Government of Canada through the Banting Postdoctoral Fellowships programme. D.B.L. is supported by a Stanford Graduate Fellowship in Science and Engineering. G.W. is supported by a National Science Foundation CAREER award (IIS 1553333), a Terman Faculty Fellowship and by the KAUST Office of Sponsored Research through the Visual Computing Center CCF grant.

Author information


  1. Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA

    • Matthew O’Toole
    • David B. Lindell
    • Gordon Wetzstein




Contributions

M.O’T. conceived the method, developed the experimental setup, performed the indoor measurements and implemented the LCT reconstruction procedure. M.O’T. and D.B.L. performed the outdoor measurements. D.B.L. applied the iterative LCT reconstruction procedures shown in the Supplementary Information. G.W. supervised all aspects of the project. All authors took part in designing the experiments and writing the paper and Supplementary Information.

Competing interests

The authors declare no competing financial interests.

Corresponding authors

Correspondence to Matthew O’Toole or Gordon Wetzstein.

Reviewer Information: Nature thanks D. Faccio, V. Goyal and M. Laurenzis for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

PDF files

  1. Supplementary Information

     This file contains Supplementary Methods, a Supplementary Discussion, Supplementary Results and Derivations supporting the main manuscript.

Zip files

  1. Supplementary Information

     This file contains the source code and data for confocal NLOS imaging: MATLAB code and data for reproducing the LCT and back-projection results that appear in the manuscript and Supplementary Information. A minimal sketch of the same pipeline, in Python rather than MATLAB, follows below.
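
For readers who want the shape of that pipeline without the MATLAB package, here is a minimal NumPy sketch of an LCT-style reconstruction. It is not the authors' released code: the function name lct_reconstruct, the parameter names (bin_size_m, wall_size_m, snr), the nearest-neighbour resampling and the scalar Wiener weight are illustrative assumptions; only the overall structure (weight and resample the time axis, deconvolve against a light-cone kernel with a 3D Wiener filter, resample back along depth) follows the light-cone transform sketched after the abstract.

    import numpy as np

    def lct_reconstruct(tau, bin_size_m, wall_size_m, snr=1e2):
        """tau: (T, X, Y) confocal photon histograms, time bins along axis 0.

        bin_size_m  : one-way distance (metres) spanned by one time bin
        wall_size_m : physical width (metres) of the scanned wall patch
        """
        T, X, Y = tau.shape
        z = (np.arange(T) + 0.5) * bin_size_m        # one-way distance per bin
        v = z ** 2                                   # v = (tc/2)^2, in m^2
        dv = (v[-1] - v[0]) / (T - 1)
        v_uniform = v[0] + dv * np.arange(T)

        # R_t: weight by v^(3/2), then resample the time axis onto a grid
        # that is uniform in v (nearest-neighbour resampling for brevity).
        src = np.searchsorted(v, v_uniform).clip(0, T - 1)
        tau_v = (tau * v[:, None, None] ** 1.5)[src]

        # Zero-pad to twice the size to avoid circular-convolution wrap-around.
        pad = np.zeros((2 * T, 2 * X, 2 * Y))
        pad[:T, :X, :Y] = tau_v

        # Light-cone kernel h(w, x, y) = delta(x^2 + y^2 - w), sampled in
        # physical units on the padded grid, centred laterally at (X, Y).
        dx = wall_size_m / X
        xs = (np.arange(2 * X) - X) * dx
        ys = (np.arange(2 * Y) - Y) * dx
        r2 = xs[:, None] ** 2 + ys[None, :] ** 2     # x^2 + y^2, in m^2
        w_bin = np.rint(r2 / dv).astype(int)         # nearest bin along v
        h = np.zeros_like(pad)
        ix, iy = np.nonzero(w_bin < 2 * T)
        h[w_bin[ix, iy], ix, iy] = 1.0

        # Wiener-filtered 3D deconvolution in the Fourier domain.
        H = np.fft.fftn(np.fft.ifftshift(h, axes=(1, 2)))
        F = np.fft.fftn(pad)
        vol = np.real(np.fft.ifftn(F * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)))
        vol = vol[:T, :X, :Y]                        # crop away the padding

        # R_z^{-1}: resample the u = z^2 axis back to uniform z and undo the
        # 1/(2z) weight, i.e. rho(z) = 2z * (R_z rho)(z^2).
        src_z = np.searchsorted(v_uniform, z ** 2).clip(0, T - 1)
        rho = 2.0 * z[:, None, None] * vol[src_z]
        return np.clip(rho, 0.0, None)               # albedo is non-negative

On an N × N × N grid the whole reconstruction is a handful of FFTs plus two 1D resamplings, roughly O(N³ log N) time and O(N³) memory, which is the practical source of the resource savings claimed in the abstract.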


Videos

  1. Confocal NLOS Imaging Based on the Light Cone Transform

     High-level overview of confocal NLOS imaging.

  2. Confocal NLOS Imaging Based on the Light Cone Transform

     A compilation of results from the manuscript and Supplementary Information.

