## Abstract

The wave nature of light imposes limits on the resolution of optical imaging systems. For over a century, the Abbe-Rayleigh criterion has been utilized to assess the spatial resolution limits of imaging instruments. Recently, there has been interest in using spatial projective measurements to enhance the resolution of imaging systems. Unfortunately, these schemes require a priori information regarding the coherence properties of “unknown” light beams and impose stringent alignment conditions. Here, we introduce a smart quantum camera for superresolving imaging that exploits the self-learning features of artificial intelligence to identify the statistical fluctuations of unknown mixtures of light sources at each pixel. This is achieved through a universal quantum model that enables the design of artificial neural networks for the identification of photon fluctuations. Our protocol overcomes limitations of existing superresolution schemes based on spatial mode projections, and consequently provides alternative methods for microscopy, remote sensing, and astronomy.

## Introduction

The spatial resolution of optical imaging systems is established by the diffraction of photons and the noise associated with their quantum fluctuations^{1,2,3,4,5}. For over a century, the Abbe-Rayleigh criterion has been used to assess the diffraction-limited resolution of optical instruments^{3,6}. At a more fundamental level, the ultimate resolution of optical instruments is established by the laws of quantum physics through the Heisenberg uncertainty principle^{7,8,9}. In classical optics, the Abbe-Rayleigh resolution criterion stipulates that an imaging system cannot resolve spatial features smaller than *λ*/2NA. In this case, *λ* represents the wavelength of the illumination field, and NA describes the numerical aperture of the optical instrument^{1,2,3,10}. Given the implications that overcoming the Abbe-Rayleigh resolution limit has for multiple applications, such as microscopy, remote sensing, and astronomy^{3,10,11,12}, there has been enormous interest in improving the spatial resolution of optical systems^{13,14,15}. So far, optical superresolution has been achieved through the decomposition of spatial modes into suitable transverse modes of light^{14,16,17}. These conventional schemes rely on spatial projective measurements to pick up phase information that is used to boost the spatial resolution of optical instruments^{14,18,19,20,21,22}.

For almost a century, the importance of phase over amplitude information has constituted established knowledge for optical engineers^{3,4,5}. Recently, this idea has been extensively investigated in the context of quantum metrology^{5,23,24,25,26}. More specifically, it has been demonstrated that phase information can be used to surpass the Abbe-Rayleigh resolution limit for the spatial identification of light sources^{13,18,19,20,27}. For example, phase information can be obtained through mode decomposition by using projective measurements or demultiplexing of spatial modes^{14,17,18,19,20}. Naturally, these approaches require a priori information regarding the coherence properties of the, in principle, ‘unknown’ light sources^{14,15,21,22}. Furthermore, these techniques impose stringent requirements on the alignment and centering conditions of imaging systems^{14,15,17,18,19,20,21,22,28,29}. Despite these limitations, most, if not all, of the current experimental protocols have relied on spatial projections and demultiplexing in the Hermite-Gaussian, Laguerre-Gaussian, and parity bases^{14,17,18,19,20,21,22}.

The quantum statistical fluctuations of photons establish the nature of light sources^{30,31,32,33,34}. As such, these fundamental properties are not affected by the spatial resolution of an optical instrument^{34}. Here, we demonstrate that measurements of the quantum statistical properties of a light field enable imaging beyond the Abbe-Rayleigh resolution limit. This is performed by exploiting the self-learning features of artificial intelligence to identify the statistical fluctuations of photon mixtures^{33}. More specifically, we demonstrate a smart quantum camera with the capability to identify photon statistics at each pixel. For this purpose, we introduce a general quantum model that describes the photon statistics produced by the scattering of an arbitrary number of light sources. This model is used to design and train artificial neural networks for the identification of light sources. Remarkably, our scheme enables us to overcome inherent limitations of existing superresolution protocols based on spatial mode projections and multiplexing^{14,17,18,19,20,21,22}.

## Results

### Concept and theory

The conceptual schematic behind our experiment is depicted in Fig. 1a. This camera utilizes an artificial neural network to identify the photon statistics of each point source that constitutes a target object. The description of the photon statistics produced by the scattering of an arbitrary number of light sources is achieved through a general model that relies on the quantum theory of optical coherence introduced by Sudarshan and Glauber^{34,35,36}. We use this model to design and train a neural network capable of identifying light sources at each pixel of our camera. This unique feature is achieved by performing photon-number-resolving detection^{33}. The sensitivity of this camera is limited by the photon fluctuations, as stipulated by the Heisenberg uncertainty principle, and not by the Abbe-Rayleigh resolution limit^{5,34}.

In general, realistic imaging instruments deal with the detection of multiple light sources. These sources can be either distinguishable or indistinguishable^{3,34}. The combination of indistinguishable sources can be represented by either coherent or incoherent superpositions of light sources characterized by Poissonian (coherent) or super-Poissonian (thermal) statistics^{34}. In our model, we first consider the indistinguishable detection of *N* coherent and *M* thermal sources. For this purpose, we make use of the P-function *P*_{coh}(*γ*) = *δ*^{2}(*γ* − *α*_{k}) to model the contribution from the *k*th coherent source with the corresponding complex amplitude *α*_{k}^{35,36}. The total complex amplitude associated with the superposition of an arbitrary number of light sources is given by \({\alpha }_{{{{\rm{tot}}}}}=\mathop{\sum }\nolimits_{k = 1}^{N}{\alpha }_{k}\). In addition, the P-function for the *l*th thermal source, with the corresponding mean photon number \({\bar{m}}_{l}\), is defined as \({P}_{{{{\rm{th}}}}}(\gamma )={(\pi {\bar{m}}_{l})}^{-1}\exp (-| \gamma {| }^{2}/{\bar{m}}_{l})\). The total mean photon number attributed to the *M* thermal sources is defined as \({m}_{{{{\rm{tot}}}}}=\mathop{\sum }\nolimits_{l = 1}^{M}{\bar{m}}_{l}\). These quantities allow us to calculate the P-function for the multisource system as
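The displayed equation did not survive extraction; convolving the delta-function P-functions of the coherent sources with the Gaussian P-functions of the thermal sources yields a displaced-thermal distribution, so a reconstruction consistent with the definitions above is

$$P_{\mathrm{th\text{-}coh}}(\gamma)=\frac{1}{\pi m_{\mathrm{tot}}}\exp\!\left(-\frac{|\gamma-\alpha_{\mathrm{tot}}|^{2}}{m_{\mathrm{tot}}}\right). \qquad (1)$$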

This approach enables the analytical description of the photon-number distribution *p*_{th-coh}(*n*) associated with the detection of an arbitrary number of indistinguishable light sources. This is calculated as \({p}_{{{{\rm{th-coh}}}}}(n)=\left\langle n\right|{\hat{\rho }}_{{{{\rm{th-coh}}}}}\left|n\right\rangle\), where \({\hat{\rho }}_{{{{\rm{th-coh}}}}}=\int {P}_{{{{\rm{th-coh}}}}}(\gamma )\left|\gamma \right\rangle \left\langle \gamma \right|{d}^{2}\gamma\). After algebraic manipulation (see Supplementary Information), we obtain the following photon-number distribution
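The displayed equation did not survive extraction. A reconstruction based on the standard displaced-thermal result, written here with the Laguerre polynomial \(L_{n}(x)={}_{1}F_{1}(-n;1;x)\) (the published form, which uses Γ(*z*) and _{1}*F*_{1} explicitly, is equivalent to this up to standard hypergeometric identities), is

$$p_{\mathrm{th\text{-}coh}}(n)=\frac{m_{\mathrm{tot}}^{\,n}}{(1+m_{\mathrm{tot}})^{\,n+1}}\exp\!\left(-\frac{|\alpha_{\mathrm{tot}}|^{2}}{1+m_{\mathrm{tot}}}\right){}_{1}F_{1}\!\left(-n;\,1;\,-\frac{|\alpha_{\mathrm{tot}}|^{2}}{m_{\mathrm{tot}}(1+m_{\mathrm{tot}})}\right) \qquad (2)$$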

where Γ(*z*) and _{1}*F*_{1}(*a*; *b*; *z*) are the Euler gamma and the Kummer confluent hypergeometric functions, respectively. This probability function enables the general description of the photon statistics produced by any indistinguishable combination of light sources. Thus, the photon distribution produced by the distinguishable detection of *N* light sources can be simply obtained by performing a discrete convolution of Eq. (2) as
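The displayed equation did not survive extraction; a reconstruction consistent with the text, in which the distribution of *N* distinguishable modes is the discrete convolution of the individual distributions, is

$$p_{\mathrm{dist}}(n)=\sum_{n_{1}+\cdots+n_{N}=n}\;\prod_{i=1}^{N}p_{i}(n_{i}), \qquad (3)$$

where each \(p_{i}\) is of the form of Eq. (2).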

The combination of Eqs. (2) and (3) allows the classification of photon-number distributions for any combination of light sources.
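To make this classification model concrete, the following sketch evaluates the indistinguishable (displaced-thermal) photon-number distribution and combines distinguishable modes by discrete convolution. It uses the Laguerre-polynomial form, equivalent to the _{1}*F*_{1} representation via *L*_{n}(*x*) = _{1}*F*_{1}(−*n*; 1; *x*); the function names and parameter values are illustrative, not from the paper.

```python
import math

import numpy as np
from scipy.special import eval_laguerre

def p_th_coh(n, m_tot, alpha_tot):
    """Photon-number distribution for an indistinguishable mixture of coherent
    and thermal light with total thermal mean photon number m_tot and total
    coherent amplitude alpha_tot (Eq. 2, Laguerre-polynomial form)."""
    x = abs(alpha_tot) ** 2
    if m_tot == 0:  # purely coherent limit: Poissonian statistics
        return math.exp(-x) * x ** n / math.factorial(n)
    pref = m_tot ** n / (1 + m_tot) ** (n + 1)
    return pref * math.exp(-x / (1 + m_tot)) * eval_laguerre(n, -x / (m_tot * (1 + m_tot)))

def p_distinguishable(dists):
    """Discrete convolution of single-mode distributions (Eq. 3)."""
    out = dists[0]
    for d in dists[1:]:
        out = np.convolve(out, d)
    return out

ns = np.arange(40)
p_mix = np.array([p_th_coh(n, 0.5, 1.0) for n in ns])  # one coherent + one thermal, indistinguishable
p_th = np.array([p_th_coh(n, 0.8, 0.0) for n in ns])   # a purely thermal mode
p_tot = p_distinguishable([p_mix, p_th])               # distinguishable combination
```

A quick sanity check of the model: the distributions are normalized, and the mean of a distinguishable combination is the sum of the single-mode means.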

### Experiment

We demonstrate our proof-of-principle quantum camera using the experimental setup shown in Fig. 1b. For this purpose, we use a continuous-wave laser at 633 nm to produce either coherent or incoherent superpositions of distinguishable, indistinguishable, or partially distinguishable light sources. In this case, the combination of photon sources, with tunable statistical fluctuations, acts as our target object. Then, we image our target object onto a digital micro-mirror device (DMD) that is used to implement raster scanning. This is implemented by selectively turning on and off groups of pixels in our DMD. The light reflected off the DMD is measured by a single-photon detector that allows us to perform photon-number-resolving detection, achieved through the technique described in ref. ^{33}.

The equations above allow us to implement a multi-layer feed-forward network for the identification of the quantum photon fluctuations of the point sources of a target object. The structure of the network consists of a group of interconnected neurons arranged in layers. Here, the information flows only in one direction, from input to output^{37,38}. As indicated in Fig. 2a, our network comprises two layers, with ten sigmoid neurons in the hidden layer (green neurons) and five softmax neurons in the output layer (orange neurons). In this case, the input features represent the probabilities of detecting *n* photons at a specific pixel, *p*(*n*), whereas the neurons in the last layer correspond to the classes to be identified. The input vector is then defined by twenty-one features corresponding to *n* = 0, 1, ..., 20. In our experiment, we define five classes that we label as: coherent-thermal (CT), thermal-thermal (TT), coherent-thermal-thermal (CTT), coherent (C), and thermal (T). If the brightness of the experiment remains constant, these classes can be directly defined through the photon-number distribution described by Eqs. (2) and (3). However, if the brightness of the sources is modified, the classes can be defined through the degree of second-order coherence \({g}^{(2)}=1+(\langle {({{\Delta }}\hat{n})}^{2}\rangle -\langle \hat{n}\rangle )/{\langle \hat{n}\rangle }^{2}\), which is intensity-independent^{30,33}. The parameters in the *g*^{(2)} function can also be calculated from Eqs. (2) and (3). It is important to mention that the output neurons provide a probability distribution over the predicted classes^{39,40}. Moreover, note that during the training stage we need to define the output classes depending on the possible combinations of light sources to be identified at the detection plane. Since our method is based on the discrimination of photon statistics, any point in the detection plane will fall within the defined classes, regardless of the position of the sources. Therefore, the spatial distribution of the sources is not required in the training process. The training details of our neural networks can be found in the Methods section.
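As a structural sketch of this architecture (21 input features, ten sigmoid hidden neurons, five softmax outputs), the forward pass can be written as follows. The randomly initialized weights are illustrative stand-ins; in the experiment they are learned as described in the Methods section.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random weights stand in for the trained ones (illustrative only).
W1, b1 = rng.normal(0.0, 0.5, (10, 21)), np.zeros(10)  # hidden layer: 10 sigmoid neurons
W2, b2 = rng.normal(0.0, 0.5, (5, 10)), np.zeros(5)    # output layer: 5 softmax neurons

CLASSES = ["CT", "TT", "CTT", "C", "T"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def classify(p):
    """p: the 21 input features, i.e., the measured probabilities p(0), ..., p(20)."""
    hidden = sigmoid(W1 @ p + b1)
    return softmax(W2 @ hidden + b2)  # probability distribution over the five classes

features = np.full(21, 1 / 21)  # a flat toy photon-number distribution
scores = classify(features)
```

The softmax output is a normalized probability distribution over the five classes, matching the description of the output neurons above.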

We test the performance of our neural network through the classification of a complex mixture of photons produced by the combination of one coherent and two thermal light sources. The accuracy of our trained neural network is reported in Fig. 2b. In our setup, the three partially overlapping sources form five classes of light with different mean photon numbers and photon statistics. We exploit the functionality of our artificial neural network to identify the underlying quantum fluctuations that characterize each kind of light. We calculate the accuracy as the ratio of true positives plus true negatives to the total number of input samples during the testing phase. Figure 2b shows the overall accuracy as a function of the number of data points used to build the probability distributions for the identification of the multiple light sources using a supervised neural network. The classification accuracy for the mixture of three light sources is 80% with 100 photon-number-resolving measurements. The performance of the neural networks increases to approximately 95% when we use 3500 data points to generate the probability distributions.

The performance of our protocol for light identification can be understood through the distribution of light sources in the probability space shown in Fig. 3. Here, we show the projection of the feature space onto the plane defined by the probabilities *p*(0), *p*(1), and *p*(2) for different numbers of data points. Each point is obtained from an experimental probability distribution. As illustrated in Fig. 3a, the distributions associated with the multiple sources obtained for 10 data points are confined to a small region of the feature space. This condition makes the identification of light sources with 10 sets of measurements extremely hard. A similar situation can be observed for the distribution in Fig. 3b, which was generated using 100 data points. As shown in Fig. 3c, the distributions produced with 1000 data points occupy different regions, although the brown and black points remain closely intertwined. These conditions enable one to identify multiple light sources. Finally, the well-separated distributions obtained with 10,000 data points in Fig. 3d enable efficient identification of light sources. These probability-space diagrams explain the performances reported in Fig. 2. An interesting feature of Fig. 3 is the fact that the distributions in the probability space are linearly separable.

As demonstrated in Fig. 4, the identification of the quantum photon fluctuations at each pixel of our camera enables us to demonstrate superresolving imaging. Our technique involves two main steps. First, we classify each pixel with the help of our neural network (see Fig. 2). Then, we use this information to perform a fitting procedure that determines the position and size of each source (see Methods). In our experiment, we prepared each source to have a mean photon number between 1 and 1.5 for the brightest pixel. The raster-scan image of a target object composed of multiple partially distinguishable sources in Fig. 4a illustrates the performance of conventional imaging protocols limited by diffraction^{4,6,7,8}. In this case, it is practically impossible to identify the multiple sources that constitute the target object. Remarkably, as shown in Fig. 4b, our protocol provides a dramatic improvement in the spatial resolution of the imaging system. In this case, we utilize the photon statistics of the complex mixture of light sources at each pixel rather than relying on the composite point spread function of the multiple sources. This allows us to surpass the diffraction limit and predict the locations of the three point sources. We then use a genetic-algorithm-based optimization to predict the actual centroids and diameters of the three point sources with Gaussian point spread functions. Finally, we use this information to reconstruct each of the sources. Then, we simply add all individual source profiles to produce a single intensity plot, as described in the Methods section. Our results clearly show the presence of the three emitters that form the remote object. The estimation of separations among light sources is performed through a fit over the classified pixel-by-pixel image. Additional details can be found in the Methods section. In Fig. 4c, d, we demonstrate the robustness of our protocol by performing superresolving imaging for a different configuration of light sources. In this case, two small sources are located inside the point-spread function of a third light source. As shown in Fig. 4c, the Abbe-Rayleigh limit precludes the identification of the light sources. However, we demonstrate a substantial improvement in spatial resolution in Fig. 4d. The plots in Fig. 4e, f correspond to the inferred spatial distributions based on the experimental pixel-by-pixel imaging used to produce Fig. 4b, d. The insets in Fig. 4e, f show photon-number probability distributions for three pixels. Sharing similarities with conventional schemes for optical superresolution^{14,18,19,20,21,22}, our technique enables imaging beyond the Abbe-Rayleigh criterion even when the detected photons are emitted by light sources of the same kind. As shown in Fig. 4, this is possible even if two thermal sources are detected. The theoretical photon-number distributions in Fig. 4e, f are obtained through least-squares regression^{41}. Here, the least-squares difference between the measured and theoretical probability distributions was minimized for 0 ≤ *n* ≤ 6. The sources were assumed to be partially distinguishable, allowing the theoretical distribution to be defined by Eqs. (2) and (3). The mean photon numbers of the sources generated for the fit add up to the measured mean photon number (see Methods section). Our scheme enables the use of the photon-number distributions or their corresponding *g*^{(2)} to characterize light sources. This allows us to determine each pixel’s corresponding statistics, regardless of the mean photon numbers of the sources in the detected field^{30,33}.
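The intensity-independent *g*^{(2)} used to characterize each pixel can be estimated directly from photon-number-resolved counts. A minimal sketch, with simulated Poissonian (laser-like) and single-mode thermal counts standing in for measured data:

```python
import numpy as np

def g2_from_counts(counts):
    """Estimate g2 = 1 + (Var(n) - <n>) / <n>^2 from photon-number samples."""
    n = np.asarray(counts, dtype=float)
    return 1.0 + (n.var() - n.mean()) / n.mean() ** 2

rng = np.random.default_rng(1)
n_bar = 1.2  # illustrative mean photon number
coherent = rng.poisson(n_bar, 200_000)                  # laser light: g2 near 1
thermal = rng.geometric(1 / (1 + n_bar), 200_000) - 1   # single-mode thermal: g2 near 2

g2_coh = g2_from_counts(coherent)
g2_th = g2_from_counts(thermal)
```

The shifted geometric distribution reproduces the Bose-Einstein photon statistics of a single thermal mode, so the two estimates land near the ideal values of 1 and 2 regardless of *n̄*, illustrating the intensity independence exploited above.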

We now provide a quantitative characterization of our superresolving imaging scheme based on the identification of photon statistics. We demonstrate that our smart camera for superresolving imaging can capture small spatial features that surpass the resolution capabilities of conventional schemes for direct imaging^{1,2,3,4,5}. Consequently, as shown in Fig. 5, our camera enables the possibility of performing imaging beyond the Abbe-Rayleigh criterion. In this case, we performed multiple experiments in which a superposition of partially distinguishable sources was imaged. The superposition was prepared using one coherent and one thermal light source separated by distances from 0 to ~2.55 mm. In Fig. 5a, we plot the predicted transverse separation *s* normalized by the Gaussian beam waist radius *w*_{0} for both protocols. Here, *w*_{0} = *λ*/(*π*NA) ≈ 1.2 mm; this parameter is obtained directly from our experiment. Furthermore, the transverse separation *s* is calculated following a similar approach to the one used to obtain Fig. 4 (see Methods). As demonstrated in Fig. 5a, our protocol enables one to resolve spatial features for sources with small separations, even under diffraction-limited conditions. As expected, for larger separation distances, the performance of our protocol matches the accuracy of intensity measurements. This is further demonstrated by the spatial profiles shown in Fig. 5b–d. The first row shows spatial profiles for three experimental points in Fig. 5a obtained through direct imaging, whereas the images in the second row were obtained using our scheme for superresolving imaging. The spatial profiles in Fig. 5b show that both imaging techniques lead to comparable resolutions and the correct identification of the centroids of the two sources. However, as shown in Fig. 5c, d, our camera outperforms direct imaging when the separations decrease. Here, the actual separation is smaller than *w*_{0}/2 for both cases.
It is worth noting that, in these cases, direct imaging cannot resolve the spatial features of the sources; its predictions become unstable and erratic. Remarkably, our simulations show excellent agreement with the experimental data obtained with our scheme for superresolving imaging (see Methods section).

## Discussion

It is worth noting that sources of light characterized by different quantum statistical properties are ubiquitous in realistic scenarios^{42}. Indeed, this situation prevails even when the detected field results from a combination of light sources of the same kind. For example, the finite size of remote stars produces multimode thermal light that is characterized by a degree of second-order coherence *g*^{(2)} that deviates from 2^{43}. Interestingly, our scheme can identify these conditions. Furthermore, smart quantum statistical imaging can have important implications for LIDAR applications^{44,45}. The performance of rangefinder systems is limited by the ability to discriminate the photon statistics of coherent and thermal light^{44,45}. Remarkably, our imaging protocol shows potential to overcome this problem. Finally, there has been interest in using optical microscopy to identify light emitters^{42}. In this case, it is possible to use our technique to form superresolving images of the optical emitters. Consequently, our work has important implications for many imaging techniques.

In conclusion, we demonstrated a robust quantum camera that enables superresolving imaging beyond the Abbe-Rayleigh resolution limit. Our scheme for quantum statistical imaging exploits the self-learning features of artificial intelligence to identify the statistical fluctuations of truly unknown mixtures of light sources. This particular feature of our scheme relies on a general model based on the theory of quantum coherence to describe the photon statistics produced by the scattering of an arbitrary number of light sources. While, in terms of resolution, the performance of our camera is on par with conventional schemes for superresolution, we demonstrated that the measurement of the quantum statistical fluctuations of photons enables one to overcome inherent limitations of existing superresolution protocols based on spatial mode projections^{14,18,19,20,21,22}. Specifically, our protocol does not require prior information about the coherence properties of the light field. In addition, it does not rely on information about the individual centroids of the sources. We believe that our work will establish a new paradigm in the field of optical imaging with important implications for microscopy, remote sensing, and astronomy^{5,6,7,8,9,10,11}.

## Methods

### Training of NN

For the sake of simplicity, we split the functionality of our neural network into two phases: the training phase and the testing phase. In the first phase, the training data is fed to the network multiple times to optimize the synaptic weights through a scaled conjugate gradient back-propagation algorithm^{46}. This optimization seeks to minimize the Kullback-Leibler divergence between the predicted and the real target classes^{47,48}. Following a standardized ratio for statistical learning, we divide our data into training (70%), validation (15%), and testing (15%) sets^{49}. The training is stopped if the algorithm's performance stops improving on the validation set or if the loss function does not decrease within a given number of training epochs^{50}. This method, known as early stopping, effectively reduces overfitting^{51}. Specifically, a limit of 1000 epochs was set for the examples shown in Figs. 4 and 5. In the test phase, we assess the performance of the algorithm by introducing a set of data that was not seen during the training process. The goal is to estimate the accuracy of the neural network for unknown data by exploiting the information unveiled during the training stage. For both phases, we prepare a data set consisting of the same number of observations for each output class. The output classes are defined by the possible combinations of the number and type of light sources at the detection plane. For example, in the experiment presented in Fig. 4, we consider three sources, two thermal and one coherent, which lead to five classes: thermal, coherent, thermal-thermal, thermal-coherent, and thermal-thermal-coherent. For each of the output classes, we prepared one thousand experimental measurements of photon statistics for both the training and test stages. Note that we train multiple neural networks by considering different numbers of data points for producing the photon statistics (see Fig. 2b).
In all cases, we keep the size of the training and test data sets invariant. The networks were trained using the neural network toolbox in MATLAB, running on a computer with an Intel Core i7-4710MQ CPU (2.50 GHz) and 32 GB of RAM.

### Fittings

To determine the optimal fits for Fig. 4e, f, we design a search space based on Eqs. (2) and (3). To do so, we first find the mean photon number of the input pixel, which is later used to constrain the search space. From here, we allow for the existence of up to three distinguishable modes, which are combined according to Eq. (3). Each of the modes contains an indistinguishable combination of up to one coherent and two thermal sources, whose number distribution is given by Eq. (2). The total combination results in a partially distinguishable mixture and provides the theoretical model for our experiment. From here, our search space is
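The displayed expression did not survive extraction; a plausible reconstruction consistent with the description (up to three distinguishable modes *i*, each containing up to one coherent and two thermal sources, constrained by the measured mean photon number \(\bar{n}_{\mathrm{exp}}\)) is

$$\mathcal{S}=\left\{\{\bar{n}_{i,c},\,\bar{n}_{i,t}\}\;:\;\sum_{i=1}^{3}\Big(\bar{n}_{i,c}+\sum_{t=1}^{2}\bar{n}_{i,t}\Big)=\bar{n}_{\mathrm{exp}}\right\},$$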

where \({\bar{n}}_{i,t}\) and \({\bar{n}}_{c}\) are the mean photon numbers of each thermal or coherent source that contributes to each distinguishable mode, respectively. The mean photon numbers of the sources must add up to the experimental mean photon number, constraining the search. A linear search was then performed over the predicted mean photon numbers, and the minimum was returned, providing the optimal fit. Finally, we note that the fitting procedure does not use more information than that provided through the classification; it solely uses the output of the classification algorithm.

### Simulation of the experiment and separation estimation protocol

To demonstrate a consistent improvement over traditional methods, we also simulated the experiment using two beams, one thermal and one coherent, with Gaussian point spread functions over a 128 × 128 grid of pixels. At each pixel, the mean photon number for each source is provided by the Gaussian point spread function, which is then used to create the appropriate distinguishable probability distribution as given in Eq. (3), creating a 128 × 128 grid of photon-number distributions. We use Gaussian fittings because this mathematical function describes the most fundamental spatial mode of an optical system. The associated class data for these distributions are then fitted using a set of pre-labeled disks and a genetic algorithm. This recreates our method in the limit of perfect classification. Each of these distributions is then used to simulate photon-number-resolving detection. This data is then used to create a normalized intensity for the classical fit. We fit the image to a combination of Gaussian PSFs. The separation *s* is found by taking the centroid of each fit and calculating the distance between them. This value is then normalized by the beam radius *w*_{0} for the sake of clarity. This process is repeated ten times for each separation in order to average out fluctuations in the fitting. When combining the results of the intensity fits, they are first divided into two sets: one in which the majority of fits return a single Gaussian, and one in which the majority return two Gaussians. The set identified as containing only a single Gaussian is then set at the Abbe-Rayleigh diffraction limit, while the remaining data is used in a linear fit. This causes the sharp transition between the two sets of data.
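A reduced, one-dimensional sketch of the classical fitting step: two noisy Gaussian profiles on a 128-pixel line cut are fitted with a two-Gaussian model, and the separation is read off from the fitted centroids. All values here are illustrative, and the least-squares `curve_fit` replaces the genetic-algorithm fit used in the actual simulation.

```python
import numpy as np
from scipy.optimize import curve_fit

w0 = 1.2  # beam-waist radius (mm), matching the value quoted in the text

def two_gauss(x, a1, c1, a2, c2, w):
    """Sum of two Gaussian point-spread profiles with a common width w."""
    return a1 * np.exp(-2 * (x - c1) ** 2 / w ** 2) + a2 * np.exp(-2 * (x - c2) ** 2 / w ** 2)

x = np.linspace(-4.0, 4.0, 128)  # 128-pixel line cut (mm)
true_sep = 1.8                   # illustrative ground-truth separation (mm)
clean = two_gauss(x, 1.0, -true_sep / 2, 0.8, true_sep / 2, w0)
noisy = clean + np.random.default_rng(2).normal(0.0, 0.01, x.size)

popt, _ = curve_fit(two_gauss, x, noisy, p0=[1.0, -1.0, 1.0, 1.0, 1.0])
separation = abs(popt[3] - popt[1])  # distance between fitted centroids
norm_sep = separation / w0           # separation in units of w0, as in Fig. 5a
```

At this separation (larger than *w*_{0}) the fit converges reliably; the instability described above appears only when the sources move well inside a single beam waist.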

## Data availability

The data sets generated and/or analyzed during this study are available from the GitHub repository at https://tinyurl.com/33kbjusf.

## Code availability

The code used to analyze the data and the related simulation files are available from the GitHub repository at https://tinyurl.com/33kbjusf.

## References

1. Abbe, E. Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung. *Arch. für mikroskopische Anat.* **9**, 413–468 (1873).
2. Rayleigh, L. XXXI. Investigations in optics, with special reference to the spectroscope. *Lond. Edinb. Dublin Philos. Mag. J. Sci.* **8**, 261–274 (1879).
3. Born, M. & Wolf, E. *Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light* (Elsevier, 2013).
4. Goodman, J. W. *Introduction to Fourier Optics* (Roberts and Company Publishers, 2005).
5. Magaña-Loaiza, O. S. & Boyd, R. W. Quantum imaging and information. *Rep. Prog. Phys.* **82**, 124401 (2019).
6. Won, R. Eyes on super-resolution. *Nat. Photonics* **3**, 368–369 (2009).
7. Stelzer, E. H. K. Beyond the diffraction limit? *Nature* **417**, 806–807 (2002).
8. Kolobov, M. I. & Fabre, C. Quantum limits on optical resolution. *Phys. Rev. Lett.* **85**, 3789–3792 (2000).
9. Stelzer, E. H. K. & Grill, S. The uncertainty principle applied to estimate focal spot dimensions. *Opt. Commun.* **173**, 51–56 (2000).
10. Beyond the diffraction limit. *Nat. Photonics* **3**, 361 (2009). https://doi.org/10.1038/nphoton.2009.100
11. Pirandola, S., Bardhan, B. R., Gehring, T., Weedbrook, C. & Lloyd, S. Advances in photonic quantum sensing. *Nat. Photonics* **12**, 724–733 (2018).
12. Hell, S. W. et al. The 2015 super-resolution microscopy roadmap. *J. Phys. D Appl. Phys.* **48**, 443001 (2015).
13. Tsang, M. Quantum imaging beyond the diffraction limit by optical centroid measurements. *Phys. Rev. Lett.* **102**, 253601 (2009).
14. Tsang, M., Nair, R. & Lu, X.-M. Quantum theory of superresolution for two incoherent optical point sources. *Phys. Rev. X* **6**, 031033 (2016).
15. Hell, S. W. & Wichmann, J. Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. *Opt. Lett.* **19**, 780–782 (1994).
16. Paúr, M. et al. Tempering Rayleigh’s curse with PSF shaping. *Optica* **5**, 1177–1180 (2018).
17. Tamburini, F., Anzolin, G., Umbriaco, G., Bianchini, A. & Barbieri, C. Overcoming the Rayleigh criterion limit with optical vortices. *Phys. Rev. Lett.* **97**, 163903 (2006).
18. Tham, W., Ferretti, H. & Steinberg, A. M. Beating Rayleigh’s curse by imaging using phase information. *Phys. Rev. Lett.* **118**, 070801 (2017).
19. Zhou, Y. et al. Quantum-limited estimation of the axial separation of two incoherent point sources. *Optica* **6**, 534–541 (2019).
20. Boucher, P., Fabre, C., Labroille, G. & Treps, N. Spatial optical mode demultiplexing as a practical tool for optimal transverse distance estimation. *Optica* **7**, 1621–1626 (2020).
21. Larson, W. & Saleh, B. E. A. Resurgence of Rayleigh’s curse in the presence of partial coherence. *Optica* **5**, 1382–1389 (2018).
22. Liang, K., Wadood, S. A. & Vamivakas, A. N. Coherence effects on estimating two-point separation. *Optica* **8**, 243–248 (2021).
23. Boto, A. N. et al. Quantum interferometric optical lithography: exploiting entanglement to beat the diffraction limit. *Phys. Rev. Lett.* **85**, 2733–2736 (2000).
24. Tang, Z. S., Durak, K. & Ling, A. Fault-tolerant and finite-error localization for point emitters within the diffraction limit. *Opt. Express* **24**, 22004–22012 (2016).
25. Parniak, M. et al. Beating the Rayleigh limit using two-photon interference. *Phys. Rev. Lett.* **121**, 250503 (2018).
26. You, C. et al. Scalable multiphoton quantum metrology with neither pre- nor post-selected measurements. *Appl. Phys. Rev.* **8**, 041406 (2021).
27. Giovannetti, V., Lloyd, S., Maccone, L. & Shapiro, J. H. Sub-Rayleigh-diffraction-bound quantum imaging. *Phys. Rev. A* **79**, 013827 (2009).
28. Magaña-Loaiza, O. S., Mirhosseini, M., Cross, R. M., Rafsanjani, S. M. H. & Boyd, R. W. Hanbury Brown and Twiss interferometry with twisted light. *Sci. Adv.* **2**, e1501143 (2016).
29. Yang, Z. et al. Digital spiral object identification using random light. *Light Sci. Appl.* **6**, e17013 (2017).
30. You, C. et al. Observation of the modification of quantum statistics of plasmonic systems. *Nat. Commun.* **12**, 5161 (2021).
31. Mandel, L. Sub-Poissonian photon statistics in resonance fluorescence. *Opt. Lett.* **4**, 205–207 (1979).
32. Magaña-Loaiza, O. S. et al. Multiphoton quantum-state engineering using conditional measurements. *npj Quantum Inf.* **5**, 80 (2019).
33. You, C. et al. Identification of light sources using machine learning. *Appl. Phys. Rev.* **7**, 021404 (2020).
34. Gerry, C. & Knight, P. L. *Introductory Quantum Optics* (Cambridge University Press, 2005).
35. Sudarshan, E. C. G. Equivalence of semiclassical and quantum mechanical descriptions of statistical light beams. *Phys. Rev. Lett.* **10**, 277–279 (1963).
36. Glauber, R. J. The quantum theory of optical coherence. *Phys. Rev.* **130**, 2529–2539 (1963).
37. Svozil, D., Kvasnicka, V. & Pospichal, J. Introduction to multi-layer feed-forward neural networks. *Chemom. Intell. Lab. Syst.* **39**, 43–62 (1997).
38. Bhusal, N. et al. Spatial mode correction of single photons using machine learning. *Adv. Quantum Technol.* **4**, 2000103 (2021).
39. Goodfellow, I., Bengio, Y. & Courville, A. *Deep Learning*, vol. 1 (MIT Press, 2016).
40. Bishop, C. M. *Pattern Recognition and Machine Learning* (Springer, 2006).
41. Massaron, L. & Boschetti, A. *Regression Analysis with Python* (Packt Publishing Ltd, 2016).
42. Polino, E., Valeri, M., Spagnolo, N. & Sciarrino, F. Photonic quantum metrology. *AVS Quantum Sci.* **2**, 024703 (2020).
43. Burenkov, I. A. et al. Full statistical mode reconstruction of a light field via a photon-number-resolved measurement. *Phys. Rev. A* **95**, 053806 (2017).
44. Cohen, L. et al. Thresholded quantum LIDAR: exploiting photon-number-resolving detection. *Phys. Rev. Lett.* **123**, 203601 (2019).
45. Habif, J. L., Jagannathan, A., Gartenstein, S., Amory, P. & Guha, S. Quantum-limited discrimination of laser light and thermal light. *Opt. Express* **29**, 7418–7427 (2021).
46. Møller, M. F. A scaled conjugate gradient algorithm for fast supervised learning. *Neural Netw.* **6**, 525–533 (1993).
47. Kullback, S. & Leibler, R. A. On information and sufficiency. *Ann. Math. Stat.* **22**, 79–86 (1951).
48. Kullback, S. *Information Theory and Statistics* (Courier Corporation, 1997).
49. Crowther, P. S. & Cox, R. J. A method for optimal division of data sets for use in neural networks. In *Knowledge-Based Intelligent Information and Engineering Systems* (Springer Berlin Heidelberg, 2005).
50. Prechelt, L. Early stopping - but when? In *Neural Networks: Tricks of the Trade* (Springer Berlin Heidelberg, 1998). https://doi.org/10.1007/3-540-49430-8_3
51. Yao, Y., Rosasco, L. & Caponnetto, A. On early stopping in gradient descent learning.

*Constr. Approx.***26**, 289–315 (2007).

## Acknowledgements

N.B., M.H., C.Y., and O.S.M.L. acknowledge support from the Army Research Office (ARO) under grant no. W911NF-20-1-0194, the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0021069, and the National Science Foundation through Grant No. ECCS-2225986. A.M. acknowledges support from the Department of Physics & Astronomy of Louisiana State University. R.J.L.M. gratefully acknowledges financial support from CONACyT under project CB-2016-01/284372 and from DGAPA-UNAM under project UNAM-PAPIIT IN102920.

## Author information

### Authors and Affiliations

### Contributions

N.B., A.M., and M.H. contributed equally. The experiment was designed by C.Y., N.B., M.H. and O.S.M.L. The theoretical description was developed by A.M., R.J.L.M., M.A.Q.J., O.S.M.L., and C.Y. The experiment was performed by M.H., N.B., and C.Y. The data was analyzed by N.B., M.H., A.M., M.A.Q.J., and C.Y. The idea was conceived by O.S.M.L. and C.Y. The project was supervised by O.S.M.L. All authors approved the final version of the manuscript.

### Corresponding author

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

**Publisher’s note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary information

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Bhusal, N., Hong, M., Miller, A. *et al.* Smart quantum statistical imaging beyond the Abbe-Rayleigh criterion.
*npj Quantum Inf* **8**, 83 (2022). https://doi.org/10.1038/s41534-022-00593-5
