## Abstract

Pixelation occurs in many imaging systems and limits the spatial resolution of the acquired images. This effect is notably present in quantum imaging experiments with correlated photons, in which the number of pixels used to detect coincidences is often limited by the sensor technology or the acquisition speed. Here, we introduce a pixel super-resolution technique based on measuring the full spatially-resolved joint probability distribution (JPD) of spatially-entangled photons. Without shifting optical elements or using prior information, our technique increases the pixel resolution of the imaging system by a factor of two and enables retrieval of spatial information lost due to undersampling. We demonstrate its use in various quantum imaging protocols using photon pairs, including quantum illumination, entanglement-enabled quantum holography, and a full-field version of N00N-state quantum holography. The JPD pixel super-resolution technique can benefit any full-field imaging system limited by the sensor spatial resolution, including all established and future photon-correlation-based quantum imaging schemes, bringing these techniques closer to real-world applications.

## Introduction

The acquisition of a high-resolution image over a large field of view is essential in most optical imaging applications. In this respect, the widespread development of digital cameras made of millions of pixels has strongly contributed to creating imaging systems with large space-bandwidth products. In classical imaging, it is therefore mainly systems with strong spatial constraints, such as lensless on-chip microscopes^{1,2}, that suffer from pixelation and undersampling. However, this effect is highly prevalent in quantum imaging schemes, where it severely hinders the progress of these techniques towards practical applications.

Quantum imaging systems harness quantum properties of light and their interaction with the environment to go beyond the limit of classical imaging or to implement unique imaging modalities^{3}. Of the many approaches, imaging schemes based on entangled photon pairs are the most common and among the most promising. Proof-of-principle demonstrations range from improving optical resolution^{4,5} and imaging sensitivity^{6,7,8,9} to the creation of new imaging protocols, such as ghost imaging^{10,11}, quantum illumination^{12,13,14,15} and quantum holography^{16,17}. Contrary to classical imaging, photon-correlation-based imaging systems operate by measuring photon coincidences between many spatial positions of the image plane in parallel (except for induced-coherence imaging approaches^{18,19,20}). In practice, this process is much more delicate than forming an intensity image by photon accumulation and therefore requires specific photodetection devices. Although such a task was originally performed with raster-scanning single-photon detectors^{10}, today most implementations use single-photon sensitive cameras such as electron-multiplying charge-coupled devices (EMCCD)^{21,22}, intensified complementary metal-oxide-semiconductor (iCMOS)^{23} and single-photon avalanche diode (SPAD) cameras^{24}.

EMCCD cameras are the most widely used devices for imaging photon correlations thanks to their high quantum efficiency, low noise and high pixel resolution. However, these cameras suffer from a very low frame rate (~100 fps) due to their serial electronic amplification process, and therefore require very long acquisition times (~10 h) to reconstruct correlation images^{13,14,17,25}. Using a smaller sensor area or a binning technique can reduce the acquisition time, but at the cost of a loss in pixel resolution. By using an image intensifier, iCMOS cameras avoid such slow electronic amplification and can therefore reach higher frame rates (~1000 fps). However, so far these cameras have only enabled correlation images with a relatively small number of pixels (and still several hours of acquisition), mostly because of their low detection efficiency and higher noise level^{23,26}. Finally, SPAD cameras are an emerging technology that can detect single photons at low noise while operating at very high frame rates (up to 800 kfps), albeit with low quantum efficiency (~20%) and typically low resolution (~1000 pixels).

It is clear from the above that nearly all sensor technologies currently used in quantum imaging experiments suffer from a poor pixel resolution, either directly when the technology does not have cameras with enough pixels or indirectly when the experiment can only operate in a reasonable time with a small number of pixels. In these systems, objects are therefore often undersampled, resulting in a loss of spatial information and the creation of artefacts in the retrieved images. Here, we demonstrate a quantum image processing technique based on entangled photon pairs that increases the pixel resolution by a factor of two. We experimentally demonstrate it in three common photon-pair-based imaging schemes: two quantum illumination protocols using (i) a near-field illumination configuration with an EMCCD camera^{13} and (ii) a far-field configuration with a SPAD camera^{27}, and (iii) an entanglement-enabled holography system^{17}. In addition, we use our JPD pixel super-resolution method in (iv) a full-field version of N00N-state quantum holography, a scheme that has so far only been demonstrated using a scanning approach^{6}. Note that we refer to our technique as 'pixel super-resolution' to avoid confusion with the term 'super-resolution' describing imaging techniques capable of overcoming the classical diffraction limit.

## Results

### Experimental demonstration

Figure 1a describes the experimental setup used to demonstrate the principle of our technique for the widely used case of *p* = 2 spatially entangled photons. The spatially entangled photon pairs are produced by type-I SPDC in a thin *β*-barium borate (BBO) crystal and illuminate an object, *t*, using a near-field illumination configuration (i.e. the output surface of the crystal is imaged onto the object). The biphoton correlation width in the camera plane is estimated to be *σ* ≈ 13 μm. The object is imaged onto an EMCCD camera with a pixel pitch of Δ = 32 μm. In our experiment, *t* is a horizontal square-modulation amplitude grating. The recorded intensity image in Fig. 1b shows a region of the object composed of 10 grating periods imaged with 25 rows of pixels. The image clearly suffers from aliasing, due to a harmonic above the Nyquist frequency of the detector array, leading to a low-frequency Moiré modulation with a period of approximately 5 pixels. Doubling the sampling frequency, i.e. super-resolving the image, would allow all the harmonics to be imaged correctly and hence remove this Moiré pattern.
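The aliasing arithmetic can be checked numerically. The sketch below uses illustrative values only (a square grating with a fundamental of 0.4 cycles per pixel, chosen to match the normalised frequencies discussed for Fig. 1f, not the experimental data): its third harmonic at 1.2 cycles per pixel folds to 0.2 when sampled at the native pitch, producing a 5-pixel Moiré beat, while sampling at half the pitch moves it to its true alias at 0.8.

```python
import numpy as np

F0 = 0.4          # grating fundamental, cycles per coarse pixel (assumed value)
N = 200           # extent in coarse pixels (integer number of grating periods)

def sample_grating(pitch):
    """Point-sample the square grating; pitch in units of the coarse pixel."""
    n_pix = int(N / pitch)
    x = np.arange(n_pix) * pitch
    # small phase offset avoids sampling exactly on the zero crossings
    return np.sign(np.sin(2 * np.pi * (F0 * x + 0.13)))

def amplitude_spectrum(signal, pitch):
    """One-sided amplitude spectrum; frequencies in cycles per coarse pixel."""
    amp = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=pitch)
    return freqs, amp

freqs_c, amp_c = amplitude_spectrum(sample_grating(1.0), 1.0)  # native pitch
freqs_f, amp_f = amplitude_spectrum(sample_grating(0.5), 0.5)  # half pitch
```

At the native pitch the spectrum shows a spurious peak at 0.2; at half the pitch this peak vanishes and the harmonic appears at 0.8 instead, mirroring the behaviour reported for Fig. 1f.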

We also measure the spatially resolved JPD of the photon pairs, Γ_{ijkl}, by identifying photon coincidences between any arbitrary pair of pixels (*i*, *j*) and (*k*, *l*) centred at spatial positions \(\overrightarrow{{r}_{1}}=({x}_{i},{y}_{j})\) and \(\overrightarrow{{r}_{2}}=({x}_{k},{y}_{l})\) using the method described in^{28}. The information contained in the JPD can be used for various purposes. For example, it was used in pioneering works with EMCCD cameras to estimate the position and momentum correlation widths of entangled photon pairs in direct imaging of the Einstein-Podolsky-Rosen paradox^{21,22}. In the experiment here, the diagonal component of the JPD, Γ_{ijij}, reconstructs an image of the object, i.e. Γ_{ijij} = ∣*t*(*x*_{i}, *y*_{j})∣^{4}. Such a diagonal image is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes using a near-field illumination configuration^{5,13,25,29}. As shown in Fig. 1c, however, measuring the diagonal image in our experiment does not improve the image quality: the Moiré pattern is still present.

There is another way to retrieve an image of the object without using the diagonal component of the JPD. For that, one can project the JPD along its sum-coordinate axis, achieved by summing Γ over all pixel pairs with a fixed coordinate sum:

$$P_{i^{+}j^{+}}^{+}=\sum_{\substack{i+k=i^{+}\\ j+l=j^{+}}}\Gamma_{ijkl} \qquad (1)$$

where *N*_{Y} × *N*_{X} is the number of pixels of the illuminated region of the camera sensor, *P*^{+} is defined as the sum-coordinate projection of the JPD and (*i*^{+}, *j*^{+}) are sum-coordinate pixel indexes. Such a projection retrieves an image of the object sampled over four times the number of pixels of the sensor: \({P}_{{i}^{+}{j}^{+}}^{+}=| t({x}_{i},{y}_{j}){| }^{4}\) for even pixel indices, i.e. *i*^{+} = 2*i* and *j*^{+} = 2*j*, with *x*_{i} = *i*Δ and *y*_{j} = *j*Δ; \({P}_{{i}^{+}{j}^{+}}^{+}=| t({x}_{i+1/2},{y}_{j+1/2}){| }^{4}\) for odd pixel indices, i.e. *i*^{+} = 2*i* + 1 and *j*^{+} = 2*j* + 1, with *x*_{i+1/2} = *x*_{i} + Δ/2 and *y*_{j+1/2} = *y*_{j} + Δ/2. The pixel resolution is therefore increased by a factor of 2 (see section JPD pixel super-resolution principle for more details). Figure 1d shows the resulting sum-coordinate projection of the JPD measured in the experiment shown in Fig. 1a. Thanks to pixel super-resolution, the spurious low-frequency Moiré modulation has been removed and the 10 grating periods are now clearly visible. For comparison, this high-resolution image is very similar to a conventional intensity image acquired using a camera with half the pixel pitch, i.e. 16 μm (Fig. 1e).
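The sum-coordinate projection can be sketched in a few lines of NumPy. This is a naive quadruple loop over a toy-sized 4D array Γ[i, j, k, l] for clarity; a sensor-sized JPD would call for a vectorised implementation.

```python
import numpy as np

def sum_coordinate_projection(gamma):
    """Project a 4D JPD gamma[i, j, k, l] onto the sum coordinates
    (i + k, j + l), yielding an image sampled on a grid with twice
    the pixel resolution in each direction."""
    ny, nx = gamma.shape[0], gamma.shape[1]
    p = np.zeros((2 * ny - 1, 2 * nx - 1))
    for i in range(ny):
        for j in range(nx):
            for k in range(ny):
                for l in range(nx):
                    p[i + k, j + l] += gamma[i, j, k, l]
    return p
```

Diagonal terms Γ_{ijij} land on even output pixels and nearest-neighbour terms on odd output pixels, which is the interlacing described in the text.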

The frequency analysis of these different images is shown in Fig. 1f and provides more quantitative information about our approach. In particular, we observe the absence of a frequency peak at 0.2 in the sum-coordinate image spectrum (dashed blue), while it is present in both the intensity (solid red) and JPD-diagonal (dashed black) image spectra. It is replaced by a peak at 0.8, which is the true harmonic frequency component of the object, as confirmed by its presence in the spectrum of the intensity image acquired using the high-resolution, 16 μm-pixel-pitch camera (dashed red). Removal of the aliased frequency component also corresponds to the disappearance of the Moiré pattern in real space. This confirms that our approach achieves pixel super-resolution and retrieves information that was lost due to undersampling. Additional measurements provided in supplementary document section 3, acquired using a camera of even lower resolution (48 μm pixel pitch), show that JPD pixel super-resolution can also recover the fundamental spectral component (main peak) even when it is absent in both the intensity and diagonal images. JPD pixel super-resolution is also confirmed by simulations detailed in supplementary document section 2.8.

Figure 1g shows the system modulation transfer functions (MTF) for the different imaging modalities, calculated using the slanted-edge technique^{30}. The MTF obtained using the JPD sum-coordinate projection (dashed blue) is approximately 1.7 times broader than those acquired using intensity (solid red) and JPD-diagonal (dashed black) images, almost matching the MTF retrieved by intensity measurement with the high-resolution 16 μm-pixel-pitch camera (dashed red). This shows that JPD pixel super-resolution not only doubles the Nyquist frequency (1/(2Δ) → 1/Δ), but also broadens the system MTF, which results in less attenuation of higher frequencies. The broadening of the MTF is explained by the fact that the effective size of a 'pixel' during a JPD measurement (i.e. the size of the surface over which the coincidences are integrated) is on average smaller than the real size of the pixels. Images retrieved by JPD pixel super-resolution are therefore similar to conventional intensity images obtained with a camera that has four times more pixels (higher Nyquist frequency) and smaller pixels (broader MTF). More details about MTF measurements are provided in Methods and in supplementary document section 4.

Interestingly, the results in Fig. 1 also lead to another conclusion: contrary to a common belief in the field, conventional imaging with photon pairs (i.e. using the JPD diagonal) does not always improve image quality compared to classical intensity imaging. For example, here the spurious Moiré effect in the diagonal image (Fig. 1c) is more intense than in the direct intensity image (Fig. 1b), which is also confirmed by the higher intensity of the 0.2 frequency peak in the diagonal image spectrum (Fig. 1f).

### JPD pixel super-resolution principle

We gain further insight into the underlying principle of JPD pixel super-resolution by inspecting the JPD of a spatially entangled two-photon state: this is a 4-dimensional object containing much richer information than a conventional 2D intensity image. In particular, the JPD contains correlation information not only between photons detected at the same pixel, but also between photons detected at nearest-neighbour pixels.

Figure 2 illustrates this concept in the one-dimensional case. In Fig. 2a, photon pairs with a correlation width *σ* illuminate a one-dimensional object *t*(*x*) imaged onto an array of pixels with pitch Δ and pixel gap *δ* (i.e. the width of the non-active areas between neighbouring pixels). When measuring the JPD, there are two main contributions: (i) The first contribution originates from pairs of photons detected at the same pixel (blue rays) and forms the JPD diagonal elements, Γ_{kk}. Because these pairs crossed the object around the same positions as the pixels, i.e. *x*_{k}, the resulting image (Fig. 2b) provides a sampling of the object, Γ_{kk} ~ ∣*t*(*x*_{k})∣^{4}, similar to that performed by a conventional intensity measurement. (ii) The second contribution originates from photon pairs detected by nearest-neighbour pixels and forms the JPD off-diagonal elements Γ_{kk+1}. Because these photons crossed the object around positions located between the pixels, i.e. *x*_{k+1/2}, the resulting image (Fig. 2c) provides a sampling of the object, Γ_{kk+1} ~ ∣*t*(*x*_{k+1/2})∣^{4}, similar to an intensity measurement performed with a sensor shifted by Δ/2 in the transverse direction (see Methods for derivations of Γ_{kk} and Γ_{kk+1}).

Finally, projecting the JPD along the sum-coordinate (Eq. 1) retrieves a high-resolution image (Fig. 2d) by interlacing diagonal and off-diagonal elements. To understand this recombination, one can expand Eq. 1 in the one-dimensional case for even and odd pixels:

$$P_{i^{+}=2k}^{+}=\Gamma_{kk}+2\sum_{m\geq 1}\Gamma_{(k-m)(k+m)} \qquad (2)$$

$$P_{i^{+}=2k+1}^{+}=2\Gamma_{kk+1}+2\sum_{m\geq 1}\Gamma_{(k-m)(k+m+1)} \qquad (3)$$

where *N* is the number of pixels of the sensor. In theory, when operating under the constraint *σ* < Δ, correlations between non-neighbouring pixels are nearly zero, i.e. Γ_{ij} ≈ 0 if ∣*i* − *j*∣ ≥ 2. The sum terms in Eqs. 2 and 3 then become negligible compared to Γ_{kk} and Γ_{kk+1}. This leads to \({P}_{{i}^{+} = 2k}^{+}={{{\Gamma }}}_{kk}\) and \({P}_{{i}^{+} = 2k+1}^{+}=2{{{\Gamma }}}_{kk+1}\), providing the super-resolved image. In practice, however, experimentally measured correlation values are noisy. This noise adds up in the sums in Eqs. 2 and 3, ultimately producing noise with a greater variance that dominates the diagonal and off-diagonal elements in the final image. To solve this issue, the JPD is filtered before performing the projection to remove the weakest correlation values, i.e. all terms except Γ_{kk} and Γ_{kk+1} are set to zero. In doing so, we also remove the noise associated with these values, which then no longer adds up in the sums, and significantly improve the quality of the final image (see Methods and Fig. 5 for more details).
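The filter-then-project step can be sketched for a 1D JPD as follows (toy data; the mask keeps only the diagonal and nearest-neighbour terms before the sum-coordinate projection of Eqs. 2 and 3):

```python
import numpy as np

def filtered_sum_projection_1d(gamma):
    """Zero all 1D-JPD terms except the diagonal and nearest-neighbour
    correlations (discarding the noisy, negligible terms), then project
    along the sum coordinate to interlace the two samplings."""
    n = gamma.shape[0]
    # mask keeps |i - j| <= 1, i.e. Gamma_kk and Gamma_kk+-1 only
    mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1
    g = np.where(mask, gamma, 0.0)
    p = np.zeros(2 * n - 1)
    for i in range(n):
        for j in range(n):
            p[i + j] += g[i, j]   # even bins: Gamma_kk; odd bins: 2 * Gamma_kk+1
    return p
```

For a symmetric Γ, even output bins reproduce Γ_{kk} and odd bins 2Γ_{kk+1}, matching the limit derived above.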

In addition, it should be noted that the JPD diagonal values (i.e. Γ_{ijij} in the two-dimensional case) are not measured directly with single-photon cameras, as most of these devices are not photon-number resolving. To circumvent this limitation, diagonal values are estimated from correlations between either vertical (Γ_{i(j±1)ij}) or horizontal (Γ_{(i±1)jij}) direct-neighbouring pixels. This has the practical consequence of restricting the super-resolution effect to one dimension, along the complementary axis. In the general case, an image super-resolved in two dimensions can still be obtained by combining two images super-resolved in each direction. In the specific case of EMCCD cameras, however, this is not possible because of charge smearing^{5}. This effect compromises the horizontal correlation values and restricts the super-resolution effect to one dimension along the vertical spatial axis (see Methods). These limitations are lifted when operating in a far-field illumination configuration^{14,17,27} (see section Application to quantum holography) and in quantum imaging schemes using two distinct cameras^{16,31}.

The previous analysis also shows that two experimental conditions must be satisfied to achieve pixel super-resolution. First, the sensor fill factor must be large enough to allow photon coincidence detection between neighbouring pixels, i.e. *δ* < *σ*. Second, the spatial resolution of the imaging system must be limited by the pixel size. This is true if the highest spatial frequency component of the optical field in the object plane is both smaller than the imaging system spatial frequency cut-off and larger than the sensor Nyquist frequency 1/(2Δ). In practice, one must verify at least that *σ* < Δ. Otherwise, a JPD sum-coordinate projection can still be retrieved, but it will not contain more information than a conventional intensity measurement. In the following, we apply our approach to real quantum imaging experiments in which the condition *δ* < *σ* < Δ holds. We note that this condition is straightforward to satisfy. Indeed, *σ* < Δ is the starting requirement for any form of pixel super-resolution to make sense: without this condition, there would not be any aliasing effects in the first place. The other condition, *δ* < *σ*, is always satisfied unless the pixel fill factor is extremely low.

### Applications to quantum illumination

We demonstrate our technique on two different experimental schemes based on the two-photon quantum illumination protocols described in^{13} and^{27}. In the first protocol, an amplitude object *t*_{1} is illuminated by photon pairs and imaged onto an EMCCD camera using an experimental setup similar to that shown in Fig. 1a. Simultaneously, another object *t*_{2} is illuminated by a classical source and also imaged on the camera (see Methods and supplementary document section 7.1 for more details about the experimental setup). The intensity image, Fig. 3a, shows a superposition of both quantum and classical images. The goal of such a protocol is to segment the quantum object and therefore retrieve an image showing only object *t*_{1} illuminated by photon pairs, effectively removing any classical objects or noise. Conventionally, this is achieved by measuring the diagonal image shown in Fig. 3b. Using the JPD pixel super-resolution processing method, we can now perform this task and simultaneously retrieve a pixel super-resolved image of *t*_{1} (Fig. 3c), in which we observe the clear removal of the Moiré effect due to aliasing.

In addition, we reproduced the same experiment using a higher-resolution camera, i.e. with a 16 μm pixel pitch (Fig. 3d–f). In this case, we observe that JPD pixel super-resolution anti-aliasing is mainly visible at the edges and corners of the object, in particular through the attenuation of the so-called staircase effects. Thus, even in imaging situations where aliasing does not produce artefacts as clear as the Moiré effect, the JPD pixel super-resolution approach still provides clear benefits in removing aliasing and improving the overall image quality.

We also apply our technique to another quantum illumination protocol, demonstrated in^{27}. This protocol performs the same task as in Fig. 3, i.e. distilling the quantum image from the classical one, but uses a far-field illumination configuration, operates in reflection and detects photons with a SPAD camera (see supplementary document section 7.2 for results and experimental arrangement).

### Application to quantum holography

We now demonstrate JPD pixel super-resolution on two different experimental quantum holography schemes. The first approach was demonstrated in^{17} and its experimental configuration is described in Fig. 4a. Pairs of photons entangled in space and polarisation illuminate an SLM and a birefringent phase object *t*_{6} positioned in each half of an optical plane using a far-field illumination configuration. The object and SLM are then both imaged onto two different parts of an EMCCD camera.

In such a far-field configuration, the information of the JPD is now concentrated around the anti-diagonal (i.e. Γ_{ij −i −j}) because photon pairs are spatially anti-correlated in the object plane^{32}. This anti-diagonal is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes that use a far-field illumination configuration^{14,17,27}. In this case, the JPD pixel super-resolution technique must be adapted by using the minus-coordinate projection *P*^{−} of the filtered JPD in place of the sum-coordinate projection *P*^{+} to retrieve the high-resolution image (see Methods).

The object considered here is a section of a bird feather, shown in Fig. 4b. The SLM is used to compensate for optical aberrations and to implement a four-step phase-shifting holographic process by displaying uniform phase patterns with values {0, *π*/2, *π*, 3*π*/2} (see Methods). In the original protocol, four different images are obtained, one for each step of the process, by measuring the anti-diagonal component Γ_{ij −i −j} of the JPD. These images are then combined numerically to reconstruct the spatial phase of the birefringent object (Fig. 4c). Using the new JPD pixel super-resolution approach, the four images are instead obtained from the minus-coordinate projection *P*^{−} of the JPD and recombined to retrieve a phase image with improved spatial resolution (Fig. 4d). We do not observe as clear a removal of aliasing artefacts in the super-resolved image as in Figs. 1d and 3c, because of the relatively smooth shape of the bird feather, which is mostly composed of low frequencies below the Nyquist limit. Resolution improvements are therefore mainly located at the edges and corners, which visually translates into an overall improvement of the image quality.
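For reference, the numerical recombination of four phase-shifted images can be sketched with the generic classical four-step formula. This is a sketch only, not the exact recombination of the quantum protocol of ref. 17: the sign convention *I*_{θ} = *A* + *B* cos(*φ* + *θ*) is an assumption and must be adapted to the actual setup.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Combine four images taken with reference phases 0, pi/2, pi and
    3*pi/2 into the object phase, assuming I_theta = A + B*cos(phi + theta):
    I_0 - I_pi = 2B*cos(phi) and I_{3pi/2} - I_{pi/2} = 2B*sin(phi)."""
    return np.arctan2(i3 - i1, i0 - i2)
```

The same combination applies pixel-wise when the four inputs are 2D arrays (the minus-coordinate projections in the super-resolved variant).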

The second approach is a full-field version of a N00N-state holographic scheme based on photon pairs (*N* = 2). N00N states are known for providing phase measurements with *N*-times better sensitivity than classical holography^{7}. Our results show that a full-field version of N00N-state holography with photon pairs not only preserves the two-fold sensitivity enhancement, but also provides a better pixel resolution, an advantage that could only be matched by a scanning approach^{6}, which would also increase the acquisition time by a factor of four (see supplementary document section 6 for experimental layout and results).

## Discussion

We have introduced a pixel super-resolution technique based on the measurement of a spatially resolved JPD of spatially entangled photons. This approach retrieves spatial information about an object that is lost due to pixelation, without shifting optical elements or relying on prior knowledge. We demonstrated that this JPD pixel super-resolution approach can improve the spatial resolution in already established quantum illumination and entanglement-enabled holography schemes, as well as in a full-field version of N00N-state quantum holography, using different types of illumination (near-field and far-field) and different single-photon-sensitive cameras (EMCCD and SPAD). Our approach has the advantage that it can be used immediately in quantum imaging schemes based on photon pairs, in some cases by simply reprocessing already acquired data. In addition, our approach can also be implemented in any classical imaging system limited by pixelation, after substituting the classical source with a source of correlated photons of similar properties. Indeed, contrary to classical pixel super-resolution techniques, such as shift-and-add approaches^{33}, wavelength scanning^{34} and machine-learning-based algorithms^{35}, the JPD pixel super-resolution approach does not require displacing optical elements in the system or having prior knowledge about the object being imaged. Although we experimentally demonstrated this technique for the case of *p* = 2 (photon pair) entanglement, we anticipate that our approach could be generalised to any *p* to further increase the pixel resolution. Photon-pair sources are without doubt the current experimental choice in any given lab, but recent efforts have shown promising progress towards direct generation of spatially entangled three-photon states^{36}. 
We also underline that although the schemes shown here used spatially entangled photons, strictly speaking it is not entanglement but only spatial correlations that are used to generate the JPD. This opens the intriguing prospect for future work to investigate the potential of classical sources of light, e.g. thermal light, to achieve similar pixel super-resolution as shown here but with ready access to *p* > 2 JPDs.

## Methods

### Experimental layouts

#### Experiment in Fig. 1a

The BBO crystal has dimensions of 0.5 × 5 × 5 mm and is cut for type-I SPDC at 405 nm with a half opening angle of 3 degrees (Newlight Photonics). It is slightly rotated around the horizontal axis to ensure near-collinear phase matching of photons at the output (i.e. the ring collapses into a disk). The pump is a continuous-wave laser at 405 nm (Coherent OBIS-LX) with an output power of approximately 200 mW and a beam diameter of 0.8 ± 0.1 mm. A 650 nm-cut-off long-pass filter is used to block pump photons after the crystal, together with a band-pass filter centred at 810 ± 5 nm. The camera is an EMCCD (Andor Ixon Ultra 897) that operates at −60 ^{∘}C, with a horizontal pixel shift readout rate of 17 MHz, a vertical pixel shift every 0.3 μs, a vertical clock amplitude voltage of +4 V above the factory setting and an amplification gain set to 1000. The camera sensor has a total of 512 × 512 pixels with a 16 μm pixel pitch and nearly unity fill factor. In Fig. 1b–d, the camera is operated with a pixel pitch of Δ = 32 μm by using 2 × 2 binning. In Fig. 1e, the camera is operated with a pixel pitch of 16 μm. Exposure time is set to 2 ms. The camera speed is about 100 frames per second (fps) using a region of interest of 100 × 100 pixels. The two-lens imaging system *f*_{1}−*f*_{2} in Fig. 1a is represented by two lenses for clarity, but is composed of a series of six lenses with focal lengths 45, 75, 50, 150, 100, 150 mm. The first and the last lens are positioned at focal distances from the crystal and the object, respectively, and the distance between consecutive lenses equals the sum of their focal lengths. Similarly, the second two-lens imaging system *f*_{3} − *f*_{4} in Fig. 1a is composed of a series of four consecutive lenses with focal lengths 150, 50, 75, 100 mm arranged as in the previous case. The imaging system magnification is 3.3. The photon correlation width in the camera plane is estimated as *σ* ≈ 13 μm.

#### Experiment used to acquire images in Fig. 3

The experimental setup is the same as that shown in Fig. 1a, with some changes in the lenses used and the addition of an external arm to superimpose the classical image. It is shown in Fig. 15a of the supplementary document. The output surface of the crystal is imaged onto an object *t*_{1} using a two-lens imaging system with focal lengths *f*_{5} = 35 mm and *f*_{6} = 75 mm. The object is then imaged onto the camera using a single-lens imaging system composed of one lens with focal length *f*_{7} = 50 mm positioned at a distance of 100 mm from both the object and the camera. Another object *t*_{2} is inserted and illuminated by a spatially filtered light-emitting diode (LED), spectrally filtered at 810 ± 5 nm. Images of both objects are superimposed on the camera using a beam splitter. *t*_{1} and *t*_{2} are negative and positive amplitude USAF resolution charts, respectively. They are shown in Figs. 15b and c of the supplementary document. Exposure time is set to 6 ms. The imaging system magnification is 2.1. The biphoton correlation width in the camera plane is estimated as *σ* ≈ 8 μm. 6 × 10^{6} frames were acquired to retrieve the intensity image and the JPD in approximately 20 h of acquisition. The same setup was used in^{13}. More details are given in section 7 of the supplementary document.

#### Experiment in Fig. 4a

The paired set of stacked BBO crystals have dimensions of 0.5 × 5 × 5 mm each and are cut for type-I SPDC at 405 nm. They are optically contacted with one crystal rotated by 90 degrees about the axis normal to the incidence face. Both crystals are slightly rotated around the horizontal and vertical axes to ensure near-collinear phase matching of photons at the output (i.e. rings collapsed into disks). The pump laser and camera are the same as in Fig. 1a. A 650 nm-cut-off long-pass filter is used to block pump photons after the crystals, together with a band-pass filter centred at 810 ± 5 nm. The SLM is a phase-only modulator (Holoeye Pluto-2-NIR-015) with 1920 × 1080 pixels and an 8 μm pixel pitch. For clarity, it is represented in transmission in Fig. 4a, but was operated in reflection. Exposure time is set to 3 ms. The camera speed is about 40 frames per second using a region of interest of 200 × 200 pixels. The single-lens Fourier imaging system *f*_{11} is composed of a series of three lenses of focal lengths 45, 125, 150 mm. The first and last lenses are positioned at focal distances from the crystal and the object-SLM plane, respectively, and the distance between each pair of lenses equals the sum of their focal lengths. The two-lens imaging system *f*_{12} − *f*_{13} is in reality composed of a series of four lenses with focal lengths 150, 75, 75, 100 mm. The first and the last lens are positioned at focal distances from the SLM-object plane and the camera, respectively, and the distance between consecutive lenses equals the sum of their focal lengths. The imaging system effective focal length is 36 mm. The photon correlation width in the camera plane is estimated as *σ* ≈ 9 μm. 2.5 × 10^{6} frames were acquired to retrieve the intensity image and the JPD in each case, in approximately 17 h of acquisition. The same setup was used in^{17}. More details are given in section 5 of the supplementary document.

### JPD measurement with a camera

Γ_{ijkl}, where (*i*, *j*) and (*k*, *l*) are two arbitrary pairs of pixels centred at positions \(\overrightarrow{{r}_{1}}=({x}_{i},{y}_{j})\) and \(\overrightarrow{{r}_{2}}=({x}_{k},{y}_{l})\), is measured by acquiring a set of *M* + 1 frames \({\{{I}^{(m)}\}}_{m\in [[1,M+1]]}\) using a fixed exposure time and then processing them using the formula:

$$\Gamma_{ijkl}=\frac{1}{M}\sum_{m=1}^{M}\left[I^{(m)}({x}_{i},{y}_{j})\,I^{(m)}({x}_{k},{y}_{l})-I^{(m)}({x}_{i},{y}_{j})\,I^{(m+1)}({x}_{k},{y}_{l})\right] \qquad (4)$$

In all the results of our work, *M* was on the order of 10^{6}–10^{7} frames. However, it is essential to note that not all the JPD values can be directly measured using this process. When using an EMCCD camera, (i) correlation values at the same pixel, i.e. Γ_{ijij}, cannot be directly measured because Eq. 4 is only valid for different pixels (*i*, *j*) ≠ (*k*, *l*)^{28}, and (ii) correlation values between vertical pixels, i.e. Γ_{iji (j±q)} (where *q* is an integer that defines the position of a pixel above or below (*i*, *j*)), cannot be measured because of the presence of charge smearing effects. To circumvent this issue, these values are interpolated from neighbouring correlation values of the JPD, i.e. [Γ_{ij (i+1) j} + Γ_{ij (i−1) j}]/2 → Γ_{ijij} and [Γ_{ij (i+1) (j±q)} + Γ_{ij (i−1) (j±q)}]/2 → Γ_{iji (j±q)}, as detailed in^{5}. As a result, the gain in resolution along the *x*-axis in the experiments using near-field imaging configurations (Figs. 1 and 3, and Figs. 15 and 10 of the supplementary document) is not optimal. However, it is important to note that the gain in pixel resolution along the *y*-axis is not affected by this interpolation technique. Therefore, the spectral analyses performed in Fig. 1 are also not impacted, because the grid objects used are horizontal (no spectral component on the *x*-axis) and all the resulting spectral curves are obtained by summing along the *x*-axis. In addition, this interpolation does not affect experiments using far-field illumination configurations in Fig. 4a and Fig. 16 of the supplementary document, because the JPD diagonals do not contain any relevant imaging information (which is in the JPD anti-diagonals). More details are provided in^{28} and in section 1 of the supplementary document.
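The frame-processing formula can be sketched as a naive NumPy estimator over toy frames: the average of same-frame intensity products minus the average of consecutive-frame products, the latter estimating the accidental coincidences. Note that, as stated above, the same-pixel entries Γ_{ijij} returned by this estimator are not physical for a non-photon-number-resolving camera and must be interpolated separately.

```python
import numpy as np

def jpd_from_frames(frames):
    """Estimate the JPD Gamma[i, j, k, l] from a stack of M+1 frames of
    shape (M+1, ny, nx): mean of same-frame products minus mean of
    consecutive-frame products (which removes accidental coincidences)."""
    frames = np.asarray(frames, dtype=float)
    m = frames.shape[0] - 1                  # M+1 frames give M summed terms
    same = np.einsum('mij,mkl->ijkl', frames[:-1], frames[:-1]) / m
    shifted = np.einsum('mij,mkl->ijkl', frames[:-1], frames[1:]) / m
    return same - shifted
```

For sensor-sized data this dense 4D estimator is memory-hungry; practical implementations restrict the computed correlations to the (anti-)diagonal neighbourhood actually used by the projection.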

### JPD pixel super-resolution in far-field illumination configuration

When an object is illuminated by photon pairs in a far-field illumination configuration (i.e. the crystal is Fourier-imaged onto the object), the JPD pixel super-resolution technique must be adapted, and the sum-coordinate projection *P*^{+} can no longer be used to retrieve the high-resolution image. First, instead of the diagonal, information about the object is retrieved by displaying the anti-diagonal component of the JPD, i.e. Γ_{ij −i −j} ≈ ∣*t*(*x*_{i}, *y*_{j})∣^{2}∣*t*(−*x*_{i}, −*y*_{j})∣^{2}. Γ_{ij −i −j} is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes using a far-field illumination configuration^{14,17,27}. Second, instead of using the sum-coordinate projection, a super-resolved image of the object is retrieved by integrating the JPD along its minus-coordinate axis:

where *N*_{Y} × *N*_{X} is the number of pixels of the illuminated region of the camera sensor, *P*^{−} is defined as the minus-coordinate projection of the JPD and (*i*^{−}, *j*^{−}) defines the minus-coordinate pixel to which a spatial variable \(\overrightarrow{{r}^{-}}=({x}_{{i}^{-}},{y}_{{j}^{-}})\) is associated. Each value \({P}_{{i}^{-}{j}^{-}}^{-}\) is obtained by adding all the values Γ_{ijkl} located on an anti-diagonal of the JPD defined by *i* − *k* = *i*^{−} and *j* − *l* = *j*^{−} (i.e. \(\overrightarrow{{r}_{1}}-\overrightarrow{{r}_{2}}=\overrightarrow{{r}^{-}}\)). In theory, calculating the minus-coordinate projection of the JPD can therefore achieve pixel super-resolution and potentially retrieve spatial information of undersampled objects that would otherwise be lost. However, it is important to note that the JPD anti-diagonal Γ_{ij−i−j} and minus-coordinate projection *P*^{−} images are not directly proportional to ∣*t*(*x*_{i}, *y*_{j})∣ as in the near-field configuration, but to ∣*t*(*x*_{i}, *y*_{j})∣∣*t*(−*x*_{i}, −*y*_{j})∣, which does not always allow ∣*t*∣ to be retrieved without ambiguity. In works using a far-field illumination configuration, this problem is solved by illuminating *t* with only half of the photon-pair beam (i.e. *t*(*x*, *y* ≤ 0) = 1). The object then appears twice in the retrieved image (the object and its point-symmetric copy), but no information is lost.
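The minus-coordinate projection described above can be sketched as follows. This is a naive quadruple loop written for clarity (function name and array layout are our own conventions); in practice the sum over anti-diagonals can be computed much more efficiently:

```python
import numpy as np

def minus_projection(G):
    """Minus-coordinate projection P- of the JPD: for each separation
    (i-, j-), sum all values G[i, j, k, l] with i - k = i- and
    j - l = j-.  G has shape (Nx, Ny, Nx, Ny); the output has shape
    (2*Nx - 1, 2*Ny - 1), with the zero-separation term at the centre."""
    Nx, Ny = G.shape[0], G.shape[1]
    P = np.zeros((2 * Nx - 1, 2 * Ny - 1))
    for i in range(Nx):
        for j in range(Ny):
            for k in range(Nx):
                for l in range(Ny):
                    P[i - k + Nx - 1, j - l + Ny - 1] += G[i, j, k, l]
    return P
```

Because the output grid of separations is twice as fine as the pixel grid, this projection carries the factor-of-two gain in pixel resolution discussed in the text.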

### JPD filtering

Figure 5a shows the sum-coordinate projection *P*^{+} calculated using equation (1) from the unfiltered JPD measured in Fig. 3. This image is very noisy and does not reveal the object. To solve this issue, a filtering method is applied that consists of setting to 0 all values of the JPD that have a negligible weight (i.e. values close to zero) so that their associated noise does not contribute to the sum. When using a near-field illumination configuration (Figs. 1 and 3, and Figs. 10 and 15 of the supplementary document), filtering is applied by setting all JPD values to zero except those of the main JPD diagonal Γ_{ijij} and of the eight other diagonals directly above and below, i.e. Γ_{ij (i±1) j}, Γ_{ij i (j±1)} and Γ_{ij (i±1) (j±1)}. When using a far-field illumination configuration (Fig. 4a and Fig. 16 of the supplementary document), filtering is applied by setting all JPD values to zero except those of the main JPD anti-diagonal Γ_{ij −i −j} and of the eight other anti-diagonals directly above and below, i.e. Γ_{ij (−i±1) −j}, Γ_{ij −i (−j±1)} and Γ_{ij (−i±1) (−j±1)}.
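The near-field filtering step amounts to a simple mask on the 4-D array. A minimal sketch (our own layout `G[i, j, k, l]`; the far-field variant is analogous, with the condition applied to the anti-diagonal offsets instead):

```python
import numpy as np

def filter_jpd_near_field(G):
    """Keep only the main JPD diagonal G[i, j, i, j] and the eight
    adjacent diagonals G[i, j, i+/-1, j], G[i, j, i, j+/-1] and
    G[i, j, i+/-1, j+/-1]; all other values are set to zero so that
    their noise does not pollute the sum-coordinate projection."""
    i, j, k, l = np.indices(G.shape)
    mask = (np.abs(i - k) <= 1) & (np.abs(j - l) <= 1)
    return np.where(mask, G, 0.0)
```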

### Normalisation

Figure 5b shows the sum-coordinate projection *P*^{+} calculated directly from the filtered JPD measured in the experiment of Fig. 3, before normalisation. This image shows an artefact in the form of inhomogeneous horizontal and vertical stripes. The artefact is very similar to that commonly observed in shift-and-add super-resolution techniques^{33}, often referred to as a 'motion error artefact'. In our case, it originates from the difference in the effective detection areas of photon pairs when they are detected by the same pixel (diagonal elements) or by neighbouring pixels (off-diagonal elements), as illustrated in Fig. 2. Using an analogy with the shift-and-add technique, our problem is equivalent to a situation in which the different shifted low-resolution images were measured using cameras with different pixel widths during the first step of the process. When these low-resolution images are recombined into a high-resolution one (the second step of shift-and-add), the artefact appears in the resulting image because some pixels are less intense than others. In practice, the simplest way to remove this artefact is to normalise each low-resolution image by its total average intensity so that neighbouring pixels in the high-resolution image are at the same level. We use such a normalisation approach in our work by dividing all values of the non-zero diagonals in the filtered JPD by their spatial average value, i.e. \({{{\Gamma }}}_{ij(i+{i}^{-})(j+{j}^{-})}\to {{{\Gamma }}}_{ij(i+{i}^{-})(j+{j}^{-})}/{\sum }_{i,j}{{{\Gamma }}}_{ij(i+{i}^{-})(j+{j}^{-})}\), where (*i*^{−}, *j*^{−}) identifies a specific JPD diagonal. After normalisation, Fig. 3f is obtained. In the case of far-field illumination, the same normalisation is applied to the values of the non-zero anti-diagonals in the filtered JPD, i.e. \({{{\Gamma }}}_{ij(i+{i}^{+})(j+{j}^{+})}\to {{{\Gamma }}}_{ij(i+{i}^{+})(j+{j}^{+})}/{\sum }_{i,j}{{{\Gamma }}}_{ij(i+{i}^{+})(j+{j}^{+})}\), where (*i*^{+}, *j*^{+}) identifies a specific JPD anti-diagonal.
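A sketch of this per-diagonal sum normalisation for the near-field case (offsets `(di, dj)` play the role of (*i*^{−}, *j*^{−}) and identify the nine retained diagonals; the function name and layout are our own):

```python
import numpy as np

def normalise_diagonals(G, offsets):
    """Divide each retained JPD diagonal, identified by its offset
    (di, dj) with values G[i, j, i + di, j + dj], by the sum of its
    values, so that neighbouring pixels of the projected
    high-resolution image sit at the same level."""
    Gout = G.astype(float).copy()
    Nx, Ny = G.shape[0], G.shape[1]
    for di, dj in offsets:
        # index ranges for which (i + di, j + dj) stays on the sensor
        i = np.arange(max(0, -di), min(Nx, Nx - di))
        j = np.arange(max(0, -dj), min(Ny, Ny - dj))
        ii, jj = np.meshgrid(i, j, indexing='ij')
        vals = G[ii, jj, ii + di, jj + dj]
        s = vals.sum()
        if s != 0:
            Gout[ii, jj, ii + di, jj + dj] = vals / s
    return Gout
```

For the nine diagonals retained by the near-field filter, `offsets` would be the set {(di, dj) : di, dj ∈ {−1, 0, 1}}.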

In some cases the artefact is reduced but still visible in the resulting image even after normalisation. The persistence of this artefact is due to the fact that the difference in the effective integration areas between diagonal and off-diagonal elements is too large to be accurately corrected by simple sum normalisation. For example, in the experiment shown in Fig. 10a of the supplementary document, the poor quality of the SPAD camera sensor, in particular its very low fill factor (10.5%), is probably at the origin of the remaining artefact. To further reduce this artefact, one could use more complex normalisation algorithms, such as *L*_{1}- or *L*_{2}-norm minimisation approaches^{37} and kernel regression^{38}, which are commonly used in shift-and-add approaches.

### Slanted-edge method

MTF measurements using the slanted-edge approach were performed with a razor blade tilted by approximately 100 mrad and positioned in the object plane of the experimental configuration shown in Fig. 1, following the standard method described in ref.^{30}. Broadening of the curves is estimated by comparing the spatial frequency values at which the MTF reaches 50% of its low-frequency value (the MTF50 criterion). More details are provided in section 4 of the supplementary document.
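The final steps of extracting MTF50 from a measured edge can be sketched as follows. This is a toy illustration under our own assumptions: it takes an already oversampled edge-spread function as input and omits the projection and binning of the slanted edge that ref.^{30} performs first:

```python
import numpy as np

def mtf50(esf, dx=1.0):
    """Differentiate an oversampled edge-spread function into a
    line-spread function, Fourier transform it, and return the first
    sampled spatial frequency at which the MTF drops below 50% of its
    zero-frequency value (the MTF50 criterion)."""
    lsf = np.diff(esf)                      # edge -> line-spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                           # normalise to MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    idx = np.argmax(mtf < 0.5)              # first frequency below 50%
    return freqs[idx]
```

For a Gaussian blur of standard deviation σ, the MTF is exp(−2π²σ²f²), so the returned value should sit near √(ln 2)/(√2 πσ).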

### Estimation of the photon correlation width *σ*

#### Near-field illumination configuration

The value of the correlation width in the image plane *σ* is obtained by calculating the position-correlation width at the output of the crystal using the formula \(\sqrt{\alpha L{\lambda }_{p}/(2\pi )}\) (*L* is the crystal thickness, *λ*_{p} is the pump beam wavelength and *α* is a parameter described in^{39}) and multiplying it by the magnification of the imaging system.

#### Far-field configuration

The value of *σ* is obtained by calculating the angular-correlation width of photons at the output of the crystal using the formula *λ*_{p}/(2*ω*)^{39} (where *ω* is the pump beam waist) and multiplying it by the effective focal length of the imaging system.
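The two estimates above can be written as short helpers (a sketch; function names and the use of SI units throughout are our own conventions):

```python
import math

def sigma_near_field(alpha, L, lam_p, magnification):
    """Correlation width in the image plane for a near-field
    configuration: sqrt(alpha * L * lambda_p / (2*pi)) at the crystal
    output, scaled by the magnification of the imaging system."""
    return math.sqrt(alpha * L * lam_p / (2 * math.pi)) * magnification

def sigma_far_field(lam_p, waist, f_eff):
    """Correlation width for a far-field configuration: the angular
    correlation width lambda_p / (2*w) scaled by the effective focal
    length of the imaging system."""
    return lam_p / (2 * waist) * f_eff
```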

In our work, values of *σ* are estimated from theory rather than with the experimental techniques described in^{21,22}, because these approaches fail to provide accurate results precisely when the correlation width is smaller than the pixel width. Note also that these width values are not strict bounds but correspond to standard-deviation widths when modelling spatial correlations with a Gaussian model^{40}. More details are provided in section 2.4 of the supplementary document.

### Analytical derivation of Γ_{kk} and Γ_{kk+1}

We consider a one-dimensional object *t*(*x*) illuminated by photon pairs in a near-field illumination configuration. Photon pairs are characterised by a two-photon wavefunction Ψ_{t}(*x*_{1}, *x*_{2}) in the object plane. JPD values Γ_{kl} are measured using an array of pixels with pitch Δ and gap *δ*. Assuming that the imaging system is not limited by diffraction but only by the sensor pixel resolution, its point spread function can be approximated by a Dirac delta function and Γ_{kl} can be formally written as:

where we assumed unity magnification between the object and camera planes. A graphical representation of this integral is shown in Fig. 5c. For clarity, we only represent an array of three pixels. The bivariate function ∣*t*(*x*_{1})*t*(*x*_{2})∣^{2} is represented in green and overlaps with a grid of squares of size Δ and spacing *δ*. Each square represents the integration area associated with a specific JPD value. For example, the central square corresponds to the integration area of Γ_{kk}, i.e. \([{x}_{k}-\frac{{{\Delta }}-\delta }{2},{x}_{k}+\frac{{{\Delta }}-\delta }{2}]\times [{x}_{k}-\frac{{{\Delta }}-\delta }{2},{x}_{k}+\frac{{{\Delta }}-\delta }{2}]\). In addition, the bivariate function ∣Ψ_{t}(*x*_{1}, *x*_{2})∣^{2} is represented by two dashed black lines. These lines delimit the most intense part of the function, which corresponds to a diagonal band of width *σ* in a double-Gaussian model^{40}.

We seek to calculate the JPD values Γ_{kk} and Γ_{kk+1}. Graphically, these values are located at the intersection between the grid, the green area and the surface inside the dashed lines. They are represented in blue and red, respectively. For small widths *σ* < Δ, it is clear from Fig. 5c that the blue and red integration areas tighten around the positions (*x*_{k}, *x*_{k}) and (*x*_{k+1/2}, *x*_{k+1/2}), which results in Γ_{kk} ~ ∣*t*(*x*_{k})∣^{4} and Γ_{kk+1} ~ ∣*t*(*x*_{k+1/2})∣^{4}. More formally, one can also apply a change of variables and perform a first-order Taylor expansion in Eq. 6 to reach the same results:

where \({S}_{0}\approx \sigma /2[1+2\sqrt{2}({{\Delta }}-\sigma /\sqrt{2})]\) and *S*_{1} ≈ *σ*^{2}/2. Full calculations are provided in section 2.5 of the supplementary document. Note also that the difference between the integration areas, *S*_{0} ≠ *S*_{1}, makes the normalisation step after calculating the JPD projection necessary (see Methods section *Normalisation*).
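As a numerical sanity check of this limit, one can integrate the model directly. In this sketch, ∣Ψ_{t}∣^{2} is modelled as a Gaussian band of width *σ* along the diagonal *x*_{1} = *x*_{2}, following the double-Gaussian model^{40}; the function name, discretisation and arguments are our own. With a constant transmission, the returned value scales as ∣*t*∣^{4}, consistent with Γ_{kk} ~ ∣*t*(*x*_{k})∣^{4}:

```python
import numpy as np

def gamma_pixel(t, x_center, width, sigma, n=200):
    """Numerically evaluate one JPD value of the 1-D model: integrate
    |t(x1) t(x2)|^2 |Psi(x1, x2)|^2 over a square integration area of
    side 'width' (the effective pixel width, Delta - delta) centred at
    x_center, with |Psi|^2 modelled as a Gaussian diagonal band of
    width sigma."""
    half = width / 2
    x = np.linspace(x_center - half, x_center + half, n)
    X1, X2 = np.meshgrid(x, x)
    psi2 = np.exp(-((X1 - X2) ** 2) / (2 * sigma ** 2))
    integrand = np.abs(t(X1) * t(X2)) ** 2 * psi2
    dx = x[1] - x[0]
    return integrand.sum() * dx * dx
```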

## Data availability

The data generated in this study have been deposited in a database under accession code https://doi.org/10.5525/gla.researchdata.1269.

## References

1. Bishara, W., Su, T.-W., Coskun, A. F. & Ozcan, A. Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution. *Opt. Express* **18**, 11181–11191 (2010).
2. Bishara, W. et al. Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array. *Lab Chip* **11**, 1276–1279 (2011).
3. Moreau, P.-A., Toninelli, E., Gregory, T. & Padgett, M. J. Imaging with quantum states of light. *Nat. Rev. Phys.* **1**, 367–380 (2019).
4. Boto, A. N. et al. Quantum interferometric optical lithography: exploiting entanglement to beat the diffraction limit. *Phys. Rev. Lett.* **85**, 2733–2736 (2000).
5. Reichert, M., Defienne, H. & Fleischer, J. W. Massively parallel coincidence counting of high-dimensional entangled states. *Sci. Rep.* **8**, 7925 (2018).
6. Ono, T., Okamoto, R. & Takeuchi, S. An entanglement-enhanced microscope. *Nat. Commun.* **4**, 2426 (2013).
7. Israel, Y., Rosen, S. & Silberberg, Y. Supersensitive polarization microscopy using NOON states of light. *Phys. Rev. Lett.* **112**, 103604 (2014).
8. Brida, G., Genovese, M. & Berchera, I. R. Experimental realization of sub-shot-noise quantum imaging. *Nat. Photonics* **4**, 227–230 (2010).
9. Camphausen, R. et al. A quantum-enhanced wide-field phase imager. *Sci. Adv.* **7**, eabj2155 (2021).
10. Pittman, T. B., Shih, Y. H., Strekalov, D. V. & Sergienko, A. V. Optical imaging by means of two-photon quantum entanglement. *Phys. Rev. A* **52**, R3429–R3432 (1995).
11. Aspden, R. S., Tasca, D. S., Boyd, R. W. & Padgett, M. J. EPR-based ghost imaging using a single-photon-sensitive camera. *New J. Phys.* **15**, 073032 (2013).
12. Lopaeva, E. D. et al. Experimental realization of quantum illumination. *Phys. Rev. Lett.* **110**, 153603 (2013).
13. Defienne, H., Reichert, M., Fleischer, J. W. & Faccio, D. Quantum image distillation. *Sci. Adv.* **5**, eaax0307 (2019).
14. Gregory, T., Moreau, P.-A., Toninelli, E. & Padgett, M. J. Imaging through noise with quantum illumination. *Sci. Adv.* **6**, eaay2652 (2020).
15. Gregory, T., Moreau, P.-A., Mekhail, S., Wolley, O. & Padgett, M. J. Noise rejection through an improved quantum illumination protocol. *Sci. Rep.* **11**, 21841 (2021).
16. Devaux, F., Mosset, A., Bassignot, F. & Lantz, E. Quantum holography with biphotons of high Schmidt number. *Phys. Rev. A* **99**, 033854 (2019).
17. Defienne, H., Ndagano, B., Lyons, A. & Faccio, D. Polarization entanglement-enabled quantum holography. *Nat. Phys.* **17**, 591–597 (2021).
18. Lemos, G. B. et al. Quantum imaging with undetected photons. *Nature* **512**, 409–412 (2014).
19. Kviatkovsky, I., Chrzanowski, H. M., Avery, E. G., Bartolomaeus, H. & Ramelow, S. Microscopy with undetected photons in the mid-infrared. *Sci. Adv.* **6**, eabd0264 (2020).
20. Vanselow, A. et al. Frequency-domain optical coherence tomography with undetected mid-infrared photons. *Optica* **7**, 1729–1736 (2020).
21. Moreau, P.-A., Mougin-Sisini, J., Devaux, F. & Lantz, E. Realization of the purely spatial Einstein-Podolsky-Rosen paradox in full-field images of spontaneous parametric down-conversion. *Phys. Rev. A* **86**, 010101 (2012).
22. Edgar, M. P. et al. Imaging high-dimensional spatial entanglement with a camera. *Nat. Commun.* **3**, 984 (2012).
23. Chrapkiewicz, R., Wasilewski, W. & Banaszek, K. High-fidelity spatially resolved multiphoton counting for quantum imaging applications. *Opt. Lett.* **39**, 5090–5093 (2014).
24. Unternährer, M., Bessire, B., Gasparini, L., Stoppa, D. & Stefanov, A. Coincidence detection of spatially correlated photon pairs with a monolithic time-resolving detector array. *Opt. Express* **24**, 28829–28841 (2016).
25. Toninelli, E. et al. Resolution-enhanced quantum imaging by centroid estimation of biphotons. *Optica* **6**, 347–353 (2019).
26. Chrapkiewicz, R., Jachura, M., Banaszek, K. & Wasilewski, W. Hologram of a single photon. *Nat. Photonics* **10**, 576–579 (2016).
27. Defienne, H., Zhao, J., Charbon, E. & Faccio, D. Full-field quantum imaging with a single-photon avalanche diode camera. *Phys. Rev. A* **103**, 042608 (2021).
28. Defienne, H., Reichert, M. & Fleischer, J. W. General model of photon-pair detection with an image sensor. *Phys. Rev. Lett.* **120**, 203604 (2018).
29. Reichert, M., Defienne, H., Sun, X. & Fleischer, J. W. Biphoton transmission through non-unitary objects. *J. Opt.* **19**, 044004 (2017).
30. Burns, P. D. et al. Slanted-edge MTF for digital camera and scanner analysis. In *IS&T PICS Conference*, 135–138 (Society for Imaging Science & Technology, 2000).
31. Devaux, F., Mosset, A., Moreau, P.-A. & Lantz, E. Imaging spatiotemporal Hong-Ou-Mandel interference of biphoton states of extremely high Schmidt number. *Phys. Rev. X* **10**, 031031 (2020).
32. Schneeloch, J. & Howell, J. C. Introduction to the transverse spatial correlations in spontaneous parametric down-conversion through the biphoton birth zone. *J. Opt.* **18**, 053501 (2016).
33. Farsiu, S., Robinson, M., Elad, M. & Milanfar, P. Fast and robust multiframe super resolution. *IEEE Trans. Image Process.* **13**, 1327–1344 (2004).
34. Luo, W., Zhang, Y., Feizi, A., Göröcs, Z. & Ozcan, A. Pixel super-resolution using wavelength scanning. *Light Sci. Appl.* **5**, e16060 (2016).
35. Shi, W. et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In *Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 1874–1883 (2016).
36. Borshchevskaya, N. A., Katamadze, K. G., Kulik, S. P. & Fedorov, M. V. Three-photon generation by means of third-order spontaneous parametric down-conversion in bulk crystals. *Laser Phys. Lett.* **12**, 115404 (2015).
37. Farsiu, S., Elad, M. & Milanfar, P. A practical approach to superresolution. In *Visual Communications and Image Processing 2006*, vol. 6077, 607703 (International Society for Optics and Photonics, 2006).
38. Takeda, H., Farsiu, S. & Milanfar, P. Kernel regression for image processing and reconstruction. *IEEE Trans. Image Process.* **16**, 349–366 (2007).
39. Chan, K. W., Torres, J. P. & Eberly, J. H. Transverse entanglement migration in Hilbert space. *Phys. Rev. A* **75**, 050101 (2007).
40. Fedorov, M. V., Mikhailova, Y. M. & Volkov, P. A. Gaussian modelling and Schmidt modes of SPDC biphoton states. *J. Phys. B: At. Mol. Opt. Phys.* **42**, 175503 (2009).

## Acknowledgements

D.F. acknowledges financial support from the Royal Academy of Engineering Chair in Emerging Technology, UK Engineering and Physical Sciences Research Council (grants EP/T00097X/1 and EP/R030081/1) and from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 801060. H.D. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 840958. JZ has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 754354.

## Author information

### Contributions

H.D. conceived the original idea, designed and performed experiments, analysed the data and prepared the manuscript. P.C. performed the slanted-edge experiment. P.C. and H.D. performed the simulations. M.R. and J.F. contributed to developing the quantum illumination protocol with the EMCCD camera. J.Z. and E.C. contributed to developing the quantum illumination protocol with the SPAD camera. A.L. and B.N. contributed to developing the entanglement-enabled quantum holography protocol. A.R.H. participated to the data analysis. All authors contributed to the manuscript. D.F. supervised the project.

## Ethics declarations

### Competing interests

The authors declare that there are no conflicts of interest related to this article. For the sake of transparency, the authors would like to disclose that E.C. holds the position of Chief Scientific Officer of Fastree3D, a company which is active in LIDAR and consumer electronics, and that he is co-founder of Pi Imaging Technology. Both companies have not been involved with the paper drafting, and at the time of writing have no commercial interests related to this article.

## Peer review

### Peer review information

*Nature Communications* thanks Eric Lantz and the other anonymous reviewers for their contribution to the peer review of this work.

## Additional information

**Publisher’s note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary information

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Defienne, H., Cameron, P., Ndagano, B. *et al.* Pixel super-resolution with spatially entangled photons.
*Nat Commun* **13**, 3566 (2022). https://doi.org/10.1038/s41467-022-31052-6

