Abstract
Reconstructing a scene’s 3D structure and reflectivity accurately with an active imaging system operating in low-light-level conditions has wide-ranging applications, spanning biological imaging to remote sensing. Here we propose and experimentally demonstrate a depth and reflectivity imaging system with a single-photon camera that generates high-quality images from ∼1 detected signal photon per pixel. Previous achievements of similar photon efficiency have been with conventional raster-scanning data collection using single-pixel photon counters capable of ∼10-ps time tagging. In contrast, our camera’s detector array requires highly parallelized time-to-digital conversions with photon time-tagging accuracy limited to ∼ns. Thus, we develop an array-specific algorithm that converts coarsely time-binned photon detections to highly accurate scene depth and reflectivity by exploiting both the transverse smoothness and longitudinal sparsity of natural scenes. By overcoming the coarse time resolution of the array, our framework uniquely achieves high photon efficiency in a relatively short acquisition time.
Introduction
Active optical imaging systems use their own light sources to recover scene information. To suppress photon noise inherent in the optical detection process, they typically require a large number of photon detections. For example, a commercially available flash camera typically collects >10^{9} photons (10^{3} photons per pixel in a 1 megapixel image) to provide the user with a single photograph^{1}. However, in remote sensing of a dynamic scene at a long standoff distance, as well as in microscope imaging of delicate biological samples, limitations on the optical flux and integration time preclude the collection of such a large number of photons^{2,3}. A key challenge in such scenarios is to make use of a small number of photon detections to accurately recover the desired scene information. Exacerbating the difficulty is that, for any fixed total acquisition time, serial acquisition through raster scanning reduces the number of photon detections per pixel. Accurate recovery from a small number of photon detections has not previously been achieved in conjunction with parallel acquisition at a large number of pixels, as in a conventional digital camera.
Our interest is in the simultaneous reconstruction of scene three-dimensional (3D) structure and reflectivity using a small number of photons, something that is important in many real-world imaging scenarios^{4,5,6}. Accurately measuring distance and estimating a scene’s 3D structure can be done from time-of-flight data collected with a pulsed-source light detection and ranging system^{7,8,9}. For applications specific to low-light-level 3D imaging, detectors that can resolve individual photon detections—from either photomultiplication^{10} or Geiger-mode avalanche operation^{11}—must be used in conjunction with a time correlator. These time-correlated single-photon detectors provide extraordinary sensitivity in time-tagging photon detections, as shown by the authors of ref. 12, who used a time-correlated single-photon avalanche diode (SPAD) array to track periodic light pulses in flight.
The state of the art in high-photon-efficiency depth and reflectivity imaging was established by the authors of first-photon imaging (FPI)^{13}, who demonstrated accurate 3D and reflectivity recovery from the first detected photon at each pixel. Their setup, which used raster scanning and a time-correlated single SPAD detector, required exactly one photon detection at each pixel, making each pixel’s acquisition time a random variable. Consequently, FPI is not applicable to operation using a SPAD camera^{14,15,16,17,18,19,20,21}—all of whose pixels must have the same acquisition time—thus precluding FPI’s reaping the marked image-acquisition speedup that the camera’s detector array affords^{12,22,23}. Although there have been extensions of FPI to the fixed acquisition-time operation needed for array detection^{24,25}, both their theoretical modeling and experimental validations were still limited to raster scanning with a single SPAD detector. As a result, they ignored the limitations of currently available SPAD cameras—much poorer time-tagging performance and pixel-to-pixel variations of SPAD properties—implying that these initial fixed acquisition-time (pseudo-array) frameworks will yield suboptimal depth and reflectivity reconstructions when used with low-light experimental data from an actual SPAD camera.
Here we propose and demonstrate a photon-efficient 3D structure and reflectivity imaging technique that can deal with the aforementioned constraints that SPAD cameras impose. We give the first experimental demonstration of accurate time-correlated SPAD-camera imaging of natural scenes obtained from ∼1 detected signal photon per pixel on average. Unlike prior work, our framework achieves high photon efficiency by exploiting the scene’s structural information in both the transverse and the longitudinal domains to censor extraneous (background light and dark count) detections from the SPAD array. Earlier works that exploit longitudinal sparsity only in a pixel-by-pixel manner require more detected signal photons to produce accurate estimates^{26,27}. Because our new imager achieves highly photon-efficient imaging in a short data-acquisition time, it paves the way for dynamic and noise-tolerant active optical imaging applications in science and technology.
Results
Imaging setup
Our experimental setup is illustrated in Fig. 1. The illumination source was a pulsed laser diode (PicoQuant LDH series with a 640-nm center wavelength), whose original output-pulse duration was increased to a full-width at half-maximum of ∼2.5 ns (that is, a root mean square (r.m.s.) value of T_{p}≈1 ns). The laser diode was pulsed at a T_{r}≈50-ns repetition period set by the SPAD array’s trigger output. A diffuser plate spatially spread the laser pulses to flood-illuminate the scene of interest. An incandescent lamp injected unwanted background light into the camera. The lamp’s power was adjusted so that (averaged over the region that was imaged) each detected photon was equally likely to be due to signal (back-reflected laser) light or background light. A standard Canon FL-series photographic lens focused the signal plus background light on the SPAD array. Each photon detection from the array was time tagged relative to the time of the most recently transmitted laser pulse and recorded (Supplementary Methods).
The SPAD array^{18,20}, covering a 4.8 × 4.8-mm footprint, consists of 32 × 32 pixels of fully independent Si SPADs and complementary metal-oxide-semiconductor (CMOS)-based electronic circuitry that includes a time-to-digital converter for each SPAD detector. The SPAD within each 150 × 150-μm pixel has a 30-μm-diameter circular active region, giving the array a 3.14% fill factor. At the 640-nm operating wavelength, each array element’s photon-detection efficiency is ∼20% and its dark-count rate is ∼100 Hz at room temperature. To extend the region that could be imaged and increase the number of pixels, we used multiple image scans to form a larger composite image. In particular, we mounted the SPAD array on a feedback-controlled, two-axis motorized translation stage to produce images with N_{x} × N_{y}=384 × 384 pixels (Supplementary Figs 1a,b and 2a–c).
The SPAD array has a Δ=390-ps time resolution set by its internal clock rate. We set each acquisition frame length to 65 μs, with a gate-on time of 16 μs and a gate-off time of 49 μs for limiting power dissipation of the chip and for data transfer. At the start of each frame, the SPAD array was set to trigger the laser to generate pulses at a ∼20-MHz repetition rate. Hence, during the 16-μs gate-on time of each frame, ∼320 pulses illuminated the scene (Supplementary Fig. 3).
Observation model
We define Z={Z_{i,j}} and A={A_{i,j}} to be the scene’s 3D structure and reflectivity that we aim to recover, and we let B={B_{i,j}} be the average rates of background-light plus dark-count detections. Flood illumination of the scene at time t=0 with a photon-flux pulse s(t) then results in the following Poisson-process rate function for the (i,j)th pixel of the composite image:

λ_{i,j}(t) = η_{i,j} A_{i,j} s(t − 2Z_{i,j}/c) + B_{i,j},
where η_{i,j}∈(0,1] is the (i,j)th detector’s photon-detection efficiency and c is the speed of light. Fabrication imperfections of the SPAD array cause some pixels to have inordinately high dark-count rates, making their detection times uninformative in our imaging experiments because they are predominantly from dark counts. Thus we performed camera calibration to determine the set of these ‘hot pixels’ (2% of all pixels in our case) so that their outputs could be ignored in the processing of the imaging data.
We define N_{z} to be the total number of time bins in which the photon detections can be found, and let C_{i,j,k} be the observed number of photon counts in the kth time bin for pixel (i,j) after n_{s} pulsed-illumination trials. By the theory of photon counting^{28}, we have that C_{i,j,k}’s statistical distribution is

C_{i,j,k} ∼ Poisson( n_{s} ∫_{(k−1)Δ}^{kΔ} λ_{i,j}(t) dt ),
for k=1,2,…,N_{z}, where we have assumed that the pulse repetition period is long enough to preclude pulse-aliasing artifacts. Also, we operate in a low-flux condition such that ∑_{k=1}^{N_{z}} C_{i,j,k}, the total number of detections at a pixel, is much less than n_{s}, the total number of illumination pulses, to avoid pulse-pileup distortions. Our imaging problem is then to construct accurate estimates of the scene’s reflectivity A and 3D structure Z from the sparse photon-detection data {C_{i,j,k}}.
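To make the observation model concrete, the per-bin counts at a single pixel can be simulated as follows. This is an illustrative sketch, not our acquisition code: the efficiency, reflectivity, background level and depth below are stand-in values, and the per-pulse background probability `b` simply folds B_{i,j}Δ into one number.

```python
import numpy as np

# Hypothetical single-pixel parameters (illustrative, not the paper's calibration)
eta, A = 0.20, 0.05          # detection efficiency and reflectivity
b = 1e-4                     # background + dark-count probability per pulse per bin
T_p, delta = 1e-9, 390e-12   # r.m.s. pulse width and time-bin duration (s)
n_s, N_z = 320, 128          # illumination pulses and number of time bins
z, c = 1.0, 3e8              # true depth (m) and speed of light (m/s)

t = (np.arange(N_z) + 0.5) * delta                   # bin centres
s = np.exp(-(t - 2 * z / c) ** 2 / (2 * T_p ** 2))   # Gaussian pulse shape
s /= s.sum()                                         # per-pulse arrival profile

lam = n_s * (eta * A * s + b)   # expected counts per bin over n_s pulses
rng = np.random.default_rng(0)
C = rng.poisson(lam)            # observed coarse histogram C_{i,j,k}
```

With these values the expected signal and background counts are each only a few photons over all 320 pulses, so the low-flux condition ∑_{k} C_{i,j,k} ≪ n_{s} holds.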
3D structure and reflectivity reconstruction
In the low-flux regime, wherein there are very few detections and many of them are extraneous, an algorithm that relies solely on the aforementioned pixelwise photodetection statistics has very limited robustness. We aim to achieve high photon efficiency by combining those photodetection statistics with prior information about natural scenes.
Most natural and man-made scenes have strong spatial correlations among neighbouring pixels in both transverse and longitudinal measurements, punctuated by sharp boundaries^{29}. While conventional methods normally treat each pixel independently, our imaging framework exploits these correlations to censor extraneous (and randomly distributed) photon-detection events due to background light and detector dark counts. It should be noted that, unlike noise mitigation via spatial filtering and averaging, in which fine spatial features are washed out by oversmoothing, our technique retains the spatial resolution set by the SPAD array. Our reconstruction algorithm balances two constraints for a given set of censored measurements: that the 3D and reflectivity image estimates come from a scene that is correlated in both the transverse and longitudinal domains, and that the estimates respect the Poisson statistics of the raw single-photon measurements.
The implementation of our reconstruction algorithm can be divided into the following three steps (Fig. 2; Supplementary Note 1).
Step 1: natural scenes have reflectivities that are spatially correlated—the reflectivity at a given pixel tends to be similar to the values at its nearest neighbours—with abrupt transitions at the boundaries between objects. We exploit these correlations by imposing a transverse-smoothness constraint, using the total-variation (TV) norm^{30}, on our reflectivity image. In this process, we ignore data from the hot-pixel set. The final reflectivity image is thus obtained by solving a regularized optimization problem.
Step 2: natural scenes have a finite number of reflectors that are clustered in depth. It follows that in an acquisition without background-light or dark-count detections, the set of detection times collected over the entire scene would have a histogram with N_{z} bins that possesses nonzero entries in only a small number of small subintervals. This longitudinal-sparsity constraint is enforced in our algorithm by solving a sparse deconvolution problem on the coarsely time-binned photon-detection data, which is specific to the array imaging setup, to obtain a small number of representative scene depths. Raw photon-detection events at times corresponding to depths differing by more than cT_{p}/2 from the representative scene depths are censored. As step 2 has identified coarse depth clusters of the scene objects, the next step of the algorithm uses the filtered set of photon detections to determine a high-resolution depth image within all identified clusters.
Step 3: similar to what was done in step 1 for reflectivity estimation, we impose a TV-norm spatial-smoothness constraint on our depth image, where data from the hot-pixel set and censored detections at the remaining pixels are ignored. The depth image is thus obtained by solving a regularized optimization problem.
Reconstruction results
Figure 3 shows experimental results of 3D structure and reflectivity reconstructions for a scene comprising a mannequin and a sunflower when, averaged over the scene, there was ∼1 signal photon detected per pixel and ∼1 extraneous (background-light plus dark-count) detection per pixel. In our experiments, the per-pixel average photon-count rates of the back-reflected waveform and the background-light plus dark-count response were 1,089 counts/s and 995 counts/s, respectively. The image resolution was 384 × 384 for this experiment. We compare our proposed method with the baseline pixelwise imaging method that uses filtered histograms^{22} and the state-of-the-art pseudo-array imaging method^{25}.
From the visualization of reflectivity overlaid on depth, we observe that the baseline pixelwise imaging method (Fig. 3a,e) generates noisy depth and reflectivity images without useful scene features, owing to the combination of low-flux operation and high rates of background-light detections plus detector dark counts. In contrast, the existing pseudo-array method—which exploits transverse spatial correlations, but presumes constant B_{i,j}—gives a reflectivity image that captures overall object features, but is oversmoothed to mitigate hot-pixel contributions (Fig. 3b). Furthermore, because the pseudo-array method presumes the 10-ps-class time tagging of a single-element SPAD used in raster-scanning setups, its depth image fails to reproduce the 3D structure of the mannequin’s face from the ns-class time tagging afforded by our SPAD camera’s detector array. In particular, it overestimates the head’s dimensions and oversmooths the facial features (Fig. 3f), whereas our array-specific method accurately captures the scene’s 3D structure and reflectivity (Fig. 3c,g). This accuracy can be seen by comparing our framework’s result with the high-flux pixelwise depth and reflectivity images (Fig. 3d,h)—obtained by detecting 550 signal photons per pixel and performing time-gated pixelwise processing—that serve as ground-truth proxies for the scene’s actual depth and reflectivity. For the fairest comparisons in Fig. 3, each algorithm—baseline pixelwise processing, pseudo-array processing and our new framework—had its parameters tuned to minimize the mean-squared degradation from the ground-truth proxies.
The depth error maps in Fig. 3i–k quantify the resolution improvements of our imager over the existing ones for this low-flux imaging experiment. Although the mean number of signal-photon detections is ∼1 per pixel, in the high-reflectivity facial regions of the mannequin the average is ∼8 signal-photon detections per pixel, while the number of signal photons detected at almost every pixel in the background portion of the scene is 0. Despite this fact, the pseudo-array imaging technique leads to a face estimate with high depth error, owing to the oversmoothing incurred in its effort to mitigate background noise. It also struggles to reconstruct the depth boundaries, where photon counts are low. Compared with conventional methods, our framework gives a much better estimate of the full 3D structure. We also study how the depth error of our framework depends on the number of photon counts at a given pixel. For example, our framework gives errors of 4.4 and 0.9 cm at pixels (260, 121) and (107, 187), which correspond to a 1-photon-count depth-boundary region and an 8-photon-count region on the mannequin’s face, respectively. Overall, we observe a negative correlation between the depth error and the number of photons detected at a pixel for our method (Supplementary Fig. 4). Recall that the time-bin duration of each pixel of the SPAD camera is Δ=390 ps, corresponding to cΔ/2≈6-cm depth resolution. Our imager thus successfully recovers depth with a mean absolute error of 2 cm, and hence with sub-bin-duration resolution, while existing methods fail to do so.
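The quoted native range resolution follows directly from the bin duration, since a detected photon’s flight path covers the round trip to the scene and back. As a quick sanity check of the numbers above (a trivial sketch, not from our processing code):

```python
# Depth spanned by one 390-ps SPAD time bin
c = 299_792_458.0            # speed of light (m/s)
delta = 390e-12              # time-bin duration (s)
bin_depth = c * delta / 2    # divide by 2 for the round trip
print(f"native depth bin: {bin_depth * 100:.1f} cm")  # ~5.8 cm, quoted as ~6 cm
```

The recovered 2-cm mean absolute depth error is well below this 5.8-cm bin depth, which is what we mean by sub-bin-duration resolution.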
To quantify the improvement in grayscale imaging, we compute the peak signal-to-noise ratio (PSNR) between the reference ground-truth reflectivity and the estimated reflectivity. While the PSNR values of the conventional pixelwise estimation and pseudo-array imaging are 19.3 and 24.9 dB, respectively, that of our method is 29.1 dB; it thus improves on both existing methods by at least 4 dB. We emphasize the difficulty of single-photon imaging in our setup by computing that the SNR of the time-of-flight of a single photon ranges from −2.2 to 7.8 dB (Supplementary Note 2). We also note that varying the regularization parameters affects the imaging performance (Supplementary Fig. 5). Lastly, the robustness of our reconstruction algorithm is evaluated by imaging an entirely different scene, consisting of a watering can and a basketball, using regularization parameters pretrained on the mannequin scene (Supplementary Fig. 6).
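The PSNR figures above follow the standard definition; a minimal helper (our own sketch, taking the peak value from the reference image) is:

```python
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB between a reference image and an
    estimate, using the reference image's peak value."""
    ref = np.asarray(ref, dtype=float)
    est = np.asarray(est, dtype=float)
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

For example, an estimate that is uniformly off by 0.1 from a unit-peak reference scores 20 dB.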
Choice of laser pulse root mean square time duration
For a transform-limited laser pulse, such as the Gaussian s(t) that our imaging framework presumes, the r.m.s. time duration T_{p} is a direct measure of system bandwidth. As such, it has an impact on the depth-imaging accuracy in low-flux operation. This impact is borne out by the simulation results in Fig. 4, where we see that the pulse waveform with the shortest r.m.s. duration does not provide the best depth recovery. Thus, in our experiments, we broadened the laser’s output pulse to T_{p}≈1 ns. The full-width at half-maximum is then 2.4 ns. This pulse duration allowed our framework to have a mean absolute depth error of 2 cm and resolve depth features well below the cΔ/2≈6-cm value set by the SPAD array’s 390-ps-duration time bins (see Fig. 4 for details on depth-recovery accuracy versus r.m.s. pulse duration).
For application of our framework to different array technology, the optimal pulse width should scale with the SPAD camera’s time-bin duration. For example, if our SPAD hardware were replaced to improve the timing resolution to the 50-ps range^{17}, we would want to keep the r.m.s. pulse width approximately three times longer than the time bin (or the full-width at half-maximum approximately six times longer), based on our method for choosing the optimal pulse width from Fig. 4. Thus, for accurate single-photon imaging with a 50-ps time-binning SPAD array, we would shorten our pulse from 1.1 ns to 140 ps.
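Assuming the roughly fixed pulse-width-to-bin ratio just described, the scaling can be written as a one-line rule. The ratio of ~3 is our reading of Fig. 4, not a hard constant:

```python
def matched_pulse_rms(bin_duration, ratio=3.0):
    """Scale the r.m.s. laser pulse width with the array's time-bin
    duration; ratio ~3 is a reading of Fig. 4, not a hard constant."""
    return ratio * bin_duration
```

For 390-ps bins this gives ~1.2 ns, close to the 1.1-ns pulse used here; for 50-ps bins it gives ~150 ps, close to the 140-ps value quoted above.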
Discussion
We have proposed and demonstrated a SPAD-camera imaging framework that generates highly accurate images of a scene’s 3D structure and reflectivity from ∼1 detected signal photon per pixel, despite the presence of extraneous detections at roughly the same rate from background light and dark counts. By explicitly modeling the limited single-photon time-tagging resolution of SPAD-array imagers, our framework markedly improves reconstruction accuracy in this low-flux regime as compared with what is achieved by existing methods. The photon efficiency of our proposed framework is quantified in Fig. 5, where we have plotted the sub-bin-duration depth error it incurs in imaging the mannequin-and-sunflower scene versus the average number of detected signal photons per pixel. For this task, our algorithm realizes centimetre-class depth resolution down to <1 detected signal photon per pixel, while the baseline pixelwise imager’s depth resolution is more than an order of magnitude worse because of its inability to cope with extraneous detections.
Because our framework employs a SPAD camera for highly photon-efficient imaging, it opens up new ways to image 3D structure and reflectivity on very short time scales, while requiring very few photon detections. Hence, it could find widespread use in applications that require fast and accurate imaging using extremely small amounts of light, such as remote terrestrial mapping^{31}, seismic imaging^{32}, fluorescence profiling^{2} and astronomy^{33}. We emphasize, in this regard, that our framework affords automatic rejection of ambient-light and dark-count noise effects without requiring sophisticated time-gating hardware. It follows that our imager could also enable rapid and noise-tolerant 3D vision for self-navigating advanced robotic systems, such as unmanned aerial vehicles and exploration rovers^{34}.
Methods
Reconstruction algorithm
Before initiating our three-step imaging algorithm, we first performed calibration measurements to: identify the SPAD array’s set of hot pixels; obtain the average background-light plus dark-count rates for the remaining pixels; and determine the laser pulse’s r.m.s. time duration (Supplementary Fig. 7a–c). For hot-pixel identification, we placed the SPAD camera in a dark room and identified pixels with dark-count rates >150 counts/s, since a standard pixel of our camera should have a dark-count rate of 100 counts/s. The background-light plus dark-count rate at each pixel was identified by simply measuring the count rate when the background-light source was on but the laser was off. Finally, the laser-pulse shape was calibrated by measuring the time histogram of 3,344 photon detections from a white calibration surface target that was placed ∼1 m away from the imaging setup. It turned out that: ∼2% of our camera’s 1,024 pixels were classified as hot pixels; the background-light plus dark-count rates were indeed spatially varying across the remaining pixels; and the laser pulse’s time duration was T_{p}≈1 ns and reasonably approximated as a Gaussian.
We then proceed to step 1 of the reconstruction algorithm: we estimate reflectivity by combining the Poisson statistics of photon counts (Supplementary Fig. 8a) with a TV-norm smoothness constraint on the estimated reflectivity—while censoring the set of hot pixels—to write the optimization as a TV-regularized, Poisson image-inpainting problem. This optimization problem is convex in the reflectivity image variable A, which allows us to solve it in a computationally efficient manner with simple projected gradient methods^{35}. This step of the algorithm inputs a parameter τ_{A} that controls the degree of spatial smoothness of the final reflectivity image estimate. For a 384 × 384 image, the processing time of step 1 was ∼6 s on a standard laptop computer (Supplementary Note 1).
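To illustrate the structure of this step, the sketch below runs a projected-gradient loop on the Poisson negative log-likelihood plus a smoothed anisotropic TV penalty. It is a simplified stand-in for the SPIRAL-TAP-style solver cited in the text, not our implementation, and the parameters `alpha` (folding together efficiency and pulse count), `beta` (background level), `tau` and the step size are hypothetical:

```python
import numpy as np

def tv_grad(A, eps=1e-8):
    """Gradient of a smoothed anisotropic total-variation penalty."""
    gx, gy = np.diff(A, axis=0), np.diff(A, axis=1)
    px = gx / np.sqrt(gx ** 2 + eps)     # phi'(u) for phi(u) = sqrt(u^2 + eps)
    py = gy / np.sqrt(gy ** 2 + eps)
    g = np.zeros_like(A)
    g[:-1, :] -= px; g[1:, :] += px      # adjoint of the finite differences
    g[:, :-1] -= py; g[:, 1:] += py
    return g

def estimate_reflectivity(C, mask, alpha, beta, tau, step=0.05, n_iter=300):
    """Projected-gradient sketch of step 1: TV-regularised Poisson
    inpainting. C: per-pixel total counts; mask: False at hot pixels."""
    A = np.maximum((C - beta) / alpha, 0.0)          # pixelwise initial guess
    for _ in range(n_iter):
        mu = alpha * A + beta                        # Poisson rate per pixel
        grad = mask * alpha * (1.0 - C / mu)         # data-term gradient (valid pixels)
        A = np.maximum(A - step * (grad + tau * tv_grad(A)), 0.0)  # project to A >= 0
    return A
```

On a smooth synthetic scene, this estimate has a lower mean-squared error than the raw pixelwise maximum-likelihood estimate, which is the point of the TV prior.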
For step 2 of the reconstruction algorithm, we filtered the photon-detection data set to impose the longitudinal constraint that the scene has a sparse set of reflectors. This is because the scaled detection-time histogram hist(cT/2), corrected for the average background-light plus dark-count detections per bin, is a proxy for hist(Z_{Δ}), where hist(Z_{Δ}) is a size-N_{z} histogram that bins the scene’s 3D structure at the camera’s cΔ/2 native range resolution. We used orthogonal matching pursuit^{36} on hist(cT/2), the coarsely binned histogram of photon detections, to find the nonzero spikes representing the object depth clusters. This step of the algorithm requires an integer parameter m that controls the number of depth clusters to be estimated; here we used m=2, and our simulations show insensitivity to overestimating the best choice of m (Supplementary Fig. 8b). We then discarded photon detections that implied depth values more than cT_{p}/2 away from the estimated depth values, because they were presumably extraneous detections that are uniformly spread out over the acquisition time (Supplementary Fig. 8c). For a 384 × 384 image, the processing time of step 2 was ∼17 s on a standard laptop computer (Supplementary Note 1).
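A toy version of this censoring step, pooling detection times into a coarse histogram and greedily peeling off Gaussian pulse atoms, might look as follows. For brevity this uses plain matching pursuit rather than the orthogonal matching pursuit cited in the text, and all parameter values in the demonstration are invented:

```python
import numpy as np

def censor_detections(times, T_p, delta, N_z, m=2, bg_per_bin=0.0):
    """Sketch of step 2: find m depth clusters by greedy (matching-pursuit-
    style) correlation of Gaussian pulse atoms against the pooled
    detection-time histogram, then censor detections farther than T_p
    (that is, c*T_p/2 in depth) from every cluster."""
    hist, edges = np.histogram(times, bins=N_z, range=(0.0, N_z * delta))
    centres = 0.5 * (edges[:-1] + edges[1:])
    r = hist.astype(float) - bg_per_bin                       # background-corrected residual
    atoms = np.exp(-(centres[None, :] - centres[:, None]) ** 2 / (2 * T_p ** 2))
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)     # unit-norm shifted pulses
    picks = []
    for _ in range(m):
        k = int(np.argmax(atoms @ r))                         # best-matching pulse position
        picks.append(centres[k])
        r -= (atoms[k] @ r) * atoms[k]                        # peel off that cluster
    picks = np.asarray(picks)
    keep = np.min(np.abs(times[:, None] - picks[None, :]), axis=1) <= T_p
    return times[keep], picks
```

Run on synthetic data with two depth clusters plus uniform background detections, the recovered cluster times land on the true cluster centres and most of the uniform background is censored.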
Having censored detections from all hot pixels and, through the longitudinal constraint, censored almost all extraneous detections at the remaining pixels, we treated all the uncensored photon detections as being from back-reflected laser light, that is, as signal-photon detections. For step 3 of our reconstruction algorithm, we estimated the scene’s 3D structure using these uncensored photon detections. Because we operated in the low-flux regime, many of the pixels had no photon detections and thus were uninformative for 3D structure estimation. A robust 3D estimation algorithm must inpaint these missing pixels using information derived from nearby pixels’ photon-detection times. Approximating the laser’s pulse waveform s(t) by a Gaussian with r.m.s. duration T_{p}, we solved a TV-regularized, Gaussian image-inpainting problem to obtain our depth estimate. This is a convex optimization problem in the depth image variable Z, and projected gradient methods were used to solve it in a computationally efficient manner. This step of the algorithm inputs a parameter τ_{Z} that controls the degree of spatial smoothness of the final depth image estimate. For a 384 × 384 image, the processing time of step 3 was ∼20 s on a standard laptop computer (Supplementary Note 1).
Code availability
The code used to generate the findings of this study is stored in the GitHub repository, github.com/photonefficientimaging/singlephotoncamera.
Data availability
The data and the code used to generate the findings of this study are stored in the GitHub repository, github.com/photonefficientimaging/singlephotoncamera. All other Supplementary Data are available from the authors upon request.
Additional information
How to cite this article: Shin, D. et al. Photon-efficient imaging with a single-photon camera. Nat. Commun. 7:12046 doi: 10.1038/ncomms12046 (2016).
References
1. Holst, G. C. CCD Arrays, Cameras, and Displays (JCD Publishing, 1998).
2. Chen, Y., Müller, J. D., So, P. T. & Gratton, E. The photon counting histogram in fluorescence fluctuation spectroscopy. Biophys. J. 77, 553–567 (1999).
3. McCarthy, A. et al. Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting. Appl. Optics 48, 6241–6251 (2009).
4. Stettner, R. in SPIE Laser Radar Technology and Applications XV, 768405 (Bellingham, WA, USA, 2010).
5. May, S., Werner, B., Surmann, H. & Pervolz, K. in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 790–795 (Beijing, China, 2006).
6. Morris, P. A., Aspden, R. S., Bell, J. E., Boyd, R. W. & Padgett, M. J. Imaging with a small number of photons. Nat. Commun. 6, 5913 (2015).
7. Lee, J., Kim, Y., Lee, K., Lee, S. & Kim, S.-W. Time-of-flight measurement with femtosecond light pulses. Nat. Photon. 4, 716–720 (2010).
8. Schwarz, B. Lidar: mapping the world in 3D. Nat. Photon. 4, 429–430 (2010).
9. Katz, O., Small, E. & Silberberg, Y. Looking around corners and through thin turbid layers in real time with scattered incoherent light. Nat. Photon. 6, 549–553 (2012).
10. Buzhan, P. et al. Silicon photomultiplier and its possible applications. Nucl. Instr. Meth. Phys. Res. Sect. A 504, 48–52 (2003).
11. Aull, B. F. et al. Geiger-mode avalanche photodiodes for three-dimensional imaging. Lincoln Lab. J. 13, 335–349 (2002).
12. Gariepy, G. et al. Single-photon sensitive light-in-flight imaging. Nat. Commun. 6, 6021 (2015).
13. Kirmani, A. et al. First-photon imaging. Science 343, 58–61 (2014).
14. Becker, W. Advanced Time-Correlated Single Photon Counting Techniques (Springer, 2005).
15. Richardson, J. et al. in 2009 IEEE Custom Integrated Circuits Conference, 77–80 (San Jose, California, USA, 2009).
16. Richardson, J., Grant, L. & Henderson, R. K. Low dark count single-photon avalanche diode structure compatible with standard nanometer scale CMOS technology. IEEE Photonics Tech. Lett. 21, 1020–1022 (2009).
17. Veerappan, C. et al. in 2011 IEEE International Solid-State Circuits Conference (ISSCC), 312–314 (San Jose, California, USA, 2011).
18. Villa, F. et al. CMOS imager with 1024 SPADs and TDCs for single-photon timing and 3D time-of-flight. IEEE J. Sel. Top. Quantum Electron. 20, 3804810 (2014).
19. Bronzi, D. et al. 100,000 frames/s 64 × 32 single-photon detector array for 2D imaging and 3D ranging. IEEE J. Sel. Top. Quantum Electron. 20, 3804310 (2014).
20. Lussana, R. et al. Enhanced single-photon time-of-flight 3D ranging. Opt. Express 23, 24962–24973 (2015).
21. Bronzi, D. et al. Automotive three-dimensional vision through a single-photon counting SPAD camera. IEEE Trans. Intell. Transp. Syst. 17, 782–795 (2016).
22. Buller, G. S. & Wallace, A. M. Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition. IEEE J. Sel. Top. Quantum Electron. 13, 1006–1015 (2007).
23. Li, D.-U. et al. Real-time fluorescence lifetime imaging system with a 32 × 32 0.13 μm CMOS low dark-count single-photon avalanche diode array. Opt. Express 18, 10257–10269 (2010).
24. Altmann, Y., Ren, X., McCarthy, A., Buller, G. S. & McLaughlin, S. Lidar waveform-based analysis of depth images constructed using sparse single-photon data. IEEE Trans. Image Process. 25, 1935–1946 (2016).
25. Shin, D., Kirmani, A., Goyal, V. K. & Shapiro, J. H. Photon-efficient computational 3D and reflectivity imaging with single-photon detectors. IEEE Trans. Comput. Imaging 1, 112–125 (2015).
26. Shin, D., Shapiro, J. H. & Goyal, V. K. Single-photon depth imaging using a union-of-subspaces model. IEEE Signal Process. Lett. 22, 2254–2258 (2015).
27. Shin, D., Xu, F., Wong, F. N. C., Shapiro, J. H. & Goyal, V. K. Computational multi-depth single-photon imaging. Opt. Express 24, 1873–1888 (2016).
28. Snyder, D. L. Random Point Processes (Wiley, 1975).
29. Besag, J. On the statistical analysis of dirty pictures. J. Roy. Statist. Soc. Ser. B 48, 259–302 (1986).
30. Chambolle, A., Caselles, V., Cremers, D., Novaga, M. & Pock, T. in Theoretical Foundations and Numerical Methods for Sparse Recovery (ed. Fornasier, M.) Ch. 9, 263–340 (Walter de Gruyter, 2010).
31. Côté, J.-F., Widlowski, J.-L., Fournier, R. A. & Verstraete, M. M. The structural and radiative consistency of three-dimensional tree reconstructions from terrestrial lidar. Remote Sensing Environ. 113, 1067–1081 (2009).
32. Haugerud, R. A. et al. High-resolution lidar topography of the Puget Lowland, Washington. GSA Today 13, 4–10 (2003).
33. Gatley, I., DePoy, D. & Fowler, A. Astronomical imaging with infrared array detectors. Science 242, 1264–1270 (1988).
34. Watts, A. C., Ambrosia, V. G. & Hinkley, E. A. Unmanned aircraft systems in remote sensing and scientific research: classification and considerations of use. Remote Sens. 4, 1671–1692 (2012).
35. Harmany, Z. T., Marcia, R. F. & Willett, R. M. This is SPIRAL-TAP: sparse Poisson intensity reconstruction algorithms—theory and practice. IEEE Trans. Image Process. 21, 1084–1096 (2012).
36. Pati, Y. C., Rezaiifar, R. & Krishnaprasad, P. S. in 1993 Conference Record of The Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, Vol. 1, 40–44 (Pacific Grove, California, USA, 1993).
Acknowledgements
The development of the reconstruction algorithm and performance of the computational imaging experiments were supported in part by the US National Science Foundation under grant nos 1161413 and 1422034, the Massachusetts Institute of Technology Lincoln Laboratory Advanced Concepts Committee and a Samsung Scholarship. The development of the 3D SPAD camera was supported by the ‘MiSPiA’ project, under the EC FP7-ICT framework, G.A. No. 257646.
Author information
Author notes
 Dongeek Shin
 & Feihu Xu
These authors contributed equally to this work.
Affiliations
Research Laboratory of Electronics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139, USA
 Dongeek Shin
 , Feihu Xu
 , Dheera Venkatraman
 , Franco N. C. Wong
 & Jeffrey H. Shapiro
Dip. Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci, 32, Milano I20133, Italy
 Rudi Lussana
 , Federica Villa
 & Franco Zappa
Department of Electrical and Computer Engineering, Boston University, 1 Silber Way, Boston, Massachusetts 02215, USA
 Vivek K. Goyal
Contributions
D.S. performed the data analysis, developed and implemented the computational reconstruction algorithm; F.X. and D.V. developed the experimental setup, performed data acquisition and analysis; R.L., F.V. and F.Z. developed and assisted with the SPAD array equipment; D.S., F.X., D.V., V.K.G., F.N.C.W. and J.H.S. discussed the experimental design; and V.K.G., F.N.C.W. and J.H.S. supervised and planned the project. All authors contributed to writing the manuscript.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Jeffrey H. Shapiro.
Supplementary information
PDF files
 1.
Supplementary Information
Supplementary Figures 1–8, Supplementary Notes 1–2, Supplementary Methods and Supplementary References.
Videos
 1.
Supplementary Movie 1
This movie illustrates our proposed three-step algorithm. From noisy SPAD-camera data, we recover the depth and reflectivity of the mannequin-plus-sunflower scene in three steps: (1) estimate the scene reflectivity by solving a regularized optimization problem; (2) censor extraneous photon detections by solving a sparse deconvolution problem; (3) estimate the scene depth by solving a regularized optimization problem. The movie shows the reconstruction process described in Figure 2.
 2.
Supplementary Movie 2
This movie illustrates the photon efficiency of our proposed algorithm as compared to the conventional filteredhistogram method. With decreases in the number of detected photons, the proposed method substantially outperforms the conventional method in both reflectivity and depth reconstructions.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/