Abstract
Light-field microscopy is a promising route to microscopic volumetric imaging, thanks to its capability to encode information on multiple planes in a single acquisition. This is achieved through its peculiar simultaneous capture of the spatial distribution and the propagation direction of light. However, state-of-the-art light-field microscopes suffer from a detrimental loss of spatial resolution compared to standard microscopes. In this article, we experimentally demonstrate the working principle of a new scheme, called Correlation Light-field Microscopy (CLM), in which the correlation between two light beams is exploited to achieve volumetric imaging with a resolution limited only by diffraction. In CLM, a correlation image is obtained by measuring intensity correlations between a large number of pairs of ultrashort frames; each pair of frames is illuminated by the two correlated beams and is exposed for a time comparable with the source coherence time. We experimentally show the capability of CLM to recover the information contained in out-of-focus planes within three-dimensional test targets and biomedical phantoms. In particular, we demonstrate the improvement of the depth of field enabled by CLM with respect to a conventional microscope of the same resolution. Moreover, the multiple perspectives contained in a single correlation image enable the reconstruction of over 50 distinguishable transverse planes within a 1 mm^{3} sample.
Introduction
Rapid imaging of three-dimensional samples at the diffraction limit is a long-standing challenge of microscopy^{1}. Many attempts are being made to address the need for rapid imaging of large volumes, with acquisition speeds sufficient to analyze dynamic biological processes, each leading to a different kind of trade-off. Progress in this field includes depth focal scanning with tunable lenses^{2,3}, light-sheet illumination^{4}, also employing non-diffracting beams^{5,6,7,8,9,10}, fast STED^{11}, fast two-photon microscopy^{12}, and multi-focus multiplexing^{13}. Software techniques such as compressive sensing and computational microscopy^{14} are also employed to improve performance. In this context, light-field microscopy is among the most promising techniques. By detecting both the spatial distribution and the propagation direction of light in a single exposure, light-field imaging has introduced the possibility of refocusing out-of-focus parts of three-dimensional samples in post-processing. The depth of field (DOF) within the imaged volume can thus be extended by stacking refocused planes at different distances^{15,16,17,18,19,20}. However, in its traditional implementation, light-field imaging is affected by the fundamental barrier imposed by the resolution-versus-DOF compromise. In the microscopic domain, this trade-off is particularly suboptimal, since the required high resolution strongly limits the DOF, making multiple scans necessary to characterize a thick sample^{21}. In microscopy applications, light-field imaging could offer a solution to the bottlenecks of long acquisition times, typical of scanning approaches, and of the unbearably large amount of data, typical of multi-focus multiplexing. However, its widespread application has been stifled by the degraded resolution, far from the diffraction limit^{16,18}.
Nevertheless, fostered by the development of image analysis tools and deconvolution algorithms that provide a partial recovery of resolution^{22,23,24}, light-field imaging has shown its potential in neuroscience applications, where it has been employed to analyze firing neurons over large areas^{25}. A miniaturized version of a light-field microscope was also recently employed to enable microscopy in freely moving mice^{26}.
In this article, we provide the experimental demonstration of a novel method to perform light-field microscopy with diffraction-limited resolution, discussing its realization and testbed applications. The new technique, capable of beating the classical microscopy limits by exploiting the statistical properties of light^{27,28,29,30,31,32}, employs the working principle of Correlation Plenoptic Imaging (CPI)^{33,34,35,36,37,38}, in which light-field imaging is performed at the diffraction limit by measuring correlations between intensity fluctuations at two disjoint detectors^{39,40,41,42,43}. Previous CPI architectures were limited to bright-field operation relying on mask objects. Here, we design a CPI architecture suitable for the different microscopy modalities (fluorescence, polarization, dark-field) that are critical for biological applications. To this end, the sample is illuminated with the whole light beam from a chaotic light source^{44}, rather than by just one beam out of a correlated beam pair^{33,36,37}. This enables imaging self-emitting, scattering, and diffusive samples, as well as performing birefringence imaging, without sacrificing the retrieved correlation. Further advantages of the proposed Correlation Light-field Microscopy (CLM) over previous CPI schemes are a speed-up of the image acquisition by over one order of magnitude, and the capability of monitoring the sample through conventional (i.e., intensity-based) diffraction-limited microscopy.
The comparison reported in Table 1 clarifies the expected theoretical improvements offered by CLM, in terms of both resolution and DOF^{44}, with respect to both standard microscopy and conventional light-field microscopy^{46}. The first column highlights the diffraction-limited imaging capability that CLM shares with conventional microscopy, as opposed to the sacrificed image resolution of conventional light-field imaging. The factor \(N_u\) that quantifies the resolution loss of conventional light-field imaging is defined by the number of resolution cells, per side, dedicated to directional information, and is proportional to the DOF improvement. The second and third columns of the table report the DOF of the three methods for object details at the resolution limit and for object details of arbitrary size, respectively. In light-field imaging, the latter represents the refocusing range, while the former determines the axial resolution in the focused plane. In conventional light-field microscopy, increasing the refocusing range (i.e., choosing large values of \(N_u\)) entails a proportional loss of transverse resolution, and an even more detrimental loss of axial resolution (proportional to \(N_u^2\)); this generally limits \(N_u\) to values smaller than 10. Furthermore, for object details larger than the resolution limit, both the DOF of standard microscopy and the refocusing range of conventional light-field microscopy scale linearly with the size of the object details; this is due to the “circle of confusion” generated by the finite numerical aperture of the imaging system. In CLM, instead, the refocusing range scales quadratically with the size of the object details, and is only limited by diffraction at the object (see Refs.^{33,37,44} for a detailed discussion; specifically, the DOF extension of CLM derives from Eq. (23) of Ref.^{44}). These are key features in ensuring the unique refocusing advantage of CLM.
The fourth column reports the axial resolution of the three techniques, as defined by the circle of confusion (see the “Materials and methods” section for details). The ratio between the third and the fourth columns can be regarded as the number of independent axial planes that each technique is capable of providing: the axial resolution is the same for all three imaging methods, but the scaling of the DOF with the square of the object resolution a, in CLM, implies a linear scaling of the number of independent axial planes with a; on the contrary, in standard light-field imaging, the number of independent axial planes is fixed by \(N_u\), which is generally significantly smaller than \(3.3 \mathrm {NA}_0 a/\lambda\). The last column of Table 1 also indicates that the refocusing range of both light-field microscopes is strictly related to the viewpoint multiplicity, defined as the number of available viewpoints, per side, on the three-dimensional sample: in all the considered imaging modalities, the viewpoint multiplicity is proportional to the aforementioned number of independent axial planes that can be refocused, given the size of the details of interest. In sharp contrast with conventional light-field microscopy, the viewpoint multiplicity of CLM can be more than one order of magnitude larger, without affecting the diffraction-limited resolution; this is especially true for imaging systems with a large numerical aperture and for refocusing far away from the focused plane (i.e., for large values of the object details a). Most importantly, the DOF extension capability of CLM is independent of the numerical aperture of the imaging system (see “Materials and methods” for details); a large numerical aperture can thus be chosen to maximize the volumetric resolution without affecting the DOF.
This is very different from conventional light-field microscopy, where the numerical aperture and \(N_u\) need to be properly chosen to achieve an acceptable compromise between resolution and DOF.
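As a quick numeric illustration of the scaling quoted above (with example values of our own choosing, not the parameters of the paper's setup), the number of independent axial planes available to CLM for details of size a can be estimated as follows:

```python
# Illustrative evaluation of the scaling quoted in the text: in CLM the number
# of independent axial planes grows linearly with the detail size a, as
# ~ 3.3 * NA_0 * a / lambda, whereas a conventional light-field microscope is
# pinned to its fixed N_u (typically < 10). Input values are examples only.

def clm_axial_planes(na, a, wavelength):
    """Estimated independent axial planes in CLM for details of size a
    (a and wavelength in the same units)."""
    return 3.3 * na * a / wavelength

# Example: NA = 0.23, detail size a = 25 um, lambda = 532 nm
planes = clm_axial_planes(0.23, 25e-6, 532e-9)
print(round(planes))  # ~36 planes at these example values, versus N_u < 10
```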
The paper is organized as follows. In the “Results” section, we outline the theoretical basis of the method and show the experimental results. In the “Discussion” section, we discuss the results, their impact on the state of the art, and the possibilities of further improvement. In the “Materials and methods” section, we provide a detailed description of the experimental setup and of the methods used to extract relevant information from correlation measurements.
Results
Concept
The correlation light-field microscope, schematically represented in Fig. 1, is based on a conventional microscope made of an objective lens (O) and a tube lens (T) that reproduce the image of the sample on a high-resolution sensor array (detector \(\mathrm {D}_a\)); this microscope can properly reconstruct only the slice of the three-dimensional object falling within its DOF. The capability of CLM to refocus out-of-focus parts of the three-dimensional sample comes from its ability to also gain directional information about the light coming from the sample. In our architecture, this is done by means of a beam splitter (BS) that reflects a fraction of the light emerging from the objective lens toward an additional lens (L), which images the objective lens onto a second high-resolution sensor array (detector \(\mathrm {D}_b\)). Further details on the experimental setup are reported in the “Materials and methods” section.
The three-dimensional sample is a chaotic light emitter or, alternatively, a diffusive, transmissive, or reflective sample illuminated by an external chaotic light source. The chaotic nature of light enables light-field imaging thanks to the rich information encoded in the correlations between intensity fluctuations. In fact, the intensities retrieved by simultaneously illuminated pixels on the two disjoint detectors \(\mathrm {D}_a\) and \(\mathrm {D}_b\) are employed in CLM to evaluate the correlation function
\(\Gamma (\varvec{\rho }_{a}, \varvec{\rho }_{b}) = \langle \Delta I_{a} (\varvec{\rho }_{a})\, \Delta I_{b} (\varvec{\rho }_{b}) \rangle , \qquad (1)\)

where \(\langle \dots \rangle\) is the average over the source statistics, \(I_{a}(\varvec{\rho }_{a})\) and \(I_{b}(\varvec{\rho }_{b})\) are the intensities at the transverse positions \(\varvec{\rho }_a\) and \(\varvec{\rho }_b\) on detectors \(\mathrm {D}_a\) and \(\mathrm {D}_b\), respectively, within the same frame, and \(\Delta I_{j} (\varvec{\rho }_{j}) = I_{j}(\varvec{\rho }_{j}) - \langle I_{j} (\varvec{\rho }_{j}) \rangle\), with \(j=a,b\). The statistical reconstruction of the correlation function requires, under the hypotheses of a stationary and ergodic source, collecting a set of N independent frames. In the best-case scenario, the exposure time of each frame is matched with the coherence time of the source.
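As a concrete illustration of this estimator, the following Python sketch (an illustration of ours, not the authors' code) computes the four-dimensional correlation of intensity fluctuations from a stack of N synchronized frame pairs:

```python
import numpy as np

def correlation_of_fluctuations(frames_a, frames_b):
    """Estimate Gamma(rho_a, rho_b) = <Delta I_a(rho_a) Delta I_b(rho_b)>
    from N synchronized frame pairs.

    frames_a: (N, Ha, Wa) stack from detector D_a (spatial arm)
    frames_b: (N, Hb, Wb) stack from detector D_b (angular arm)
    Returns a 4-D array of shape (Ha, Wa, Hb, Wb).
    """
    n = frames_a.shape[0]
    # Intensity fluctuations: subtract the per-pixel average over frames.
    da = frames_a - frames_a.mean(axis=0)
    db = frames_b - frames_b.mean(axis=0)
    # Average the product of fluctuations over the N frames.
    return np.einsum('nij,nkl->ijkl', da, db) / n
```

For realistic frame counts and sensor sizes the full 4-D array is large, so in practice one would evaluate it in tiles, or only for the pixel pairs of interest.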
The light-field imaging capability of CLM emerges explicitly when considering the geometrical-optics limit of the above correlation function, which reads^{44,45}
where \(F(\varvec{\rho }_s)\) is the intensity profile of light from the sample, \(P(\varvec{\rho }_O)\) is the intensity transmission function of the objective, f is the distance from the objective of the generic plane within the three-dimensional object, \(f_T\) is the focal length of the tube lens, and \(M_L\) is the magnification of the image of the objective lens retrieved by \(\mathrm {D}_b\). When the plane of interest is in focus (i.e., \(f=f_O\), with \(f_O\) the focal length of the objective), the correlation simply gives a focused image identical to the one retrieved by detector \(\mathrm {D}_a\). However, as shown in Fig. 2, points of the three-dimensional sample that are out of focus (i.e., lying in planes at a distance \(f \ne f_O\) from the objective) are seen as shifted, and their displacement depends on the specific pixel \(\varvec{\rho }_b\) chosen on sensor \(\mathrm {D}_b\), corresponding to the point \(\varvec{\rho }_O=\varvec{\rho }_b/M_{L}\) on the objective. In other words, for three-dimensional samples thicker than the natural DOF of the microscope, different values of \(\varvec{\rho }_b\) correspond to different choices of the point of view on the sample: the correlation function in Eq. (1) has the form of a four-dimensional array, characterized by the detector coordinates \((x_a, y_a, x_b, y_b)\), encoding all the spatial and angular information needed for refocusing and multi-perspective imaging. By fixing the coordinates \((x_b,y_b)\) of the 4D array, one takes a “slice” of the correlation function, which corresponds to selecting an image of the sample from a chosen viewpoint on the objective lens. This property enables detecting the position of sample details in three dimensions, and highlighting hidden parts of the sample. The refocused image of a sample plane placed at an arbitrary distance f from the objective can be obtained by properly stacking and summing such different perspectives^{44,45}:
with \(M=f_T/f_O\) being the natural microscope magnification. The refocusing procedure significantly increases the signal-to-noise ratio (SNR) with respect to that of the single perspective associated with \(\varvec{\rho }_b\). Notice that the technique can be generalized to the case of samples with an axially varying refractive index, upon replacing physical distances with optical distances. Moreover, the reconstruction of the correlation function has proven robust against the presence of turbulence and scattering surrounding the sample, within a limit distance defined by the wavelength and the transverse coherence of the emitted light^{44}.
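The stack-and-sum idea behind the refocusing formula can be sketched as follows. Note that the linear, viewpoint-dependent shift used here is a simplified stand-in for the exact rescaling prescribed by Eq. (3) and Refs.^{44,45}; the function name and parameters are illustrative:

```python
import numpy as np

def refocus(gamma, alpha):
    """Schematic shift-and-sum refocusing of a 4-D correlation function.

    gamma: array Gamma[xa, ya, xb, yb]; each fixed (xb, yb) slice is one
           perspective image of the sample.
    alpha: pixel shift per unit viewpoint displacement; in CLM this factor
           is fixed by the refocusing distance f through Eq. (3) -- here it
           is left as a free parameter of the sketch.
    """
    ha, wa, hb, wb = gamma.shape
    out = np.zeros((ha, wa))
    for xb in range(hb):
        for yb in range(wb):
            view = gamma[:, :, xb, yb]
            # Undo the viewpoint-dependent displacement of out-of-focus
            # details before accumulating (np.roll wraps at the edges,
            # which is acceptable for this illustration only).
            dx = int(round(alpha * (xb - hb // 2)))
            dy = int(round(alpha * (yb - wb // 2)))
            out += np.roll(view, (dx, dy), axis=(0, 1))
    # Averaging the realigned perspectives sharpens the chosen plane and
    # raises the SNR, as discussed in the text.
    return out / (hb * wb)
```

With the correct shift factor, details on the selected plane add up coherently, while details on other planes are smeared out over many displaced copies.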
Before moving on to the experimental demonstration of CLM, let us spend a few words on the advantages offered by the capability of CLM to collect a large number of viewpoints, as enabled by the large objective plane on which perspectives can be selected, and by the consequently wide angle from which the sample can be observed. First, refocusing shares with 3D imaging the close connection between the maximum observation angle and the achievable DOF. Second, the higher the number of points of view superimposed to obtain the refocused image (see the integration over \(\rho _b\) in Eq. (3)), the more effective both the enhancement of features on the plane of interest and the suppression of contributions from neighboring planes, as is well known in 3D imaging. Moreover, superimposing a large number of perspectives to form the final image is advantageous in terms of noise reduction: when points of view with statistically independent noise are summed, the signal-to-noise ratio increases pointwise in proportion to the square root of the number of contributions.
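The square-root scaling mentioned above is easy to verify numerically; the sketch below (with purely illustrative numbers) averages synthetic viewpoints that share the same signal but carry statistically independent noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "viewpoint" sees the same signal plus independent unit-variance noise;
# averaging n of them shrinks the noise by ~sqrt(n), i.e. raises the SNR by
# ~sqrt(n). Numbers below are purely illustrative.
signal = 1.0
n_views, n_pix = 400, 10_000
views = signal + rng.normal(0.0, 1.0, size=(n_views, n_pix))

single_snr = signal / views[0].std()       # SNR of one perspective
averaged = views.mean(axis=0)              # superposition of all perspectives
avg_snr = signal / averaged.std()

print(avg_snr / single_snr)  # close to sqrt(400) = 20
```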
Volumetric imaging by CLM
The refocusing and depth-mapping capability of CLM has preliminarily been tested with a simple and controllable three-dimensional object, made of two planar resolution targets (named \(3\mathrm {D}_1\) and \(3\mathrm {D}_2\)) placed at two different distances from the objective lens, well outside its natural DOF. Data have been acquired by focusing the correlation light-field microscope onto a plane containing neither of the two test targets. Correlation measurements, combined with Eq. (3), have been employed to refocus the two test targets, separately, starting from this dataset. The illuminated parts of both targets contain triple slits with center-to-center distance \(d = 49.6\,\upmu \mathrm {m}\) and slit width \(a=d/2\); the overall linear field of view (FOV) is \(0.54\,\mathrm {mm}\). The test targets are placed at a distance of \(2.5\,\mathrm {mm}\) from each other (i.e., \(f_{\text {3D}_1} - f_O = -1250\,\mathrm {\upmu m}\) and \(f_{\text {3D}_2} - f_O = +1250\,\mathrm {\upmu m}\), where \(f_O\) is the focal length of the objective lens), which is 6 times larger than the natural DOF of the microscope at the given size of the sample details (i.e., the circle of confusion, see Table 1). The reported results have been obtained by evaluating the correlation function over \(N=5\times 10^3\) acquired frames; further details on the SNR variation with the number of collected frames are reported in the Supplementary Information.
The improvement of CLM over standard microscopy can be observed in Fig. 3a, where we report the resolution-versus-DOF compromise of the two microscopes. In particular, the curves indicate the resolution limits of CLM and of a standard microscope (SM) with the same numerical aperture, as a function of the distance from the objective focal plane. The resolution limit is defined in both cases as the value d of the center-to-center distance between two slits of width \(a=d/2\) such that their images can be discriminated at 10% visibility; this definition generalizes the Rayleigh criterion to out-of-focus images. For a fixed slit separation d (vertical axis), one can identify the longitudinal range \(f-f_O\) (horizontal axis) where the images of the two slits can be discriminated, based on our theoretical results. Points labeled from A to E and from A’ to D’ in Fig. 3a were experimentally investigated to demonstrate the agreement of the experimental results with the theoretical prediction of the resolution and DOF limits. We explored the range between \(-1\) mm and \(+1\) mm along the optical axis, in steps of \(250\,\upmu\)m, by employing different triple-slit masks of a planar resolution test target, characterized by center-to-center distances ranging from \(44\,\upmu \mathrm {m}\) (A and A’) to \(4\,\upmu \mathrm {m}\) (E). In particular, point E is close to the diffraction limit of a standard microscope and shows that CLM is capable of the same resolution at focus. The experimental validation of the CLM refocusing capability for cases A and D’ is reported in Fig. 3b; the refocused images obtained in all the other cases are reported for completeness in the Supplementary Information. The red points in Fig. 3a identify the parameters of the resolution targets \(3\mathrm {D}_1\) and \(3\mathrm {D}_2\) that compose our three-dimensional test object. The successful experimental refocusing of both targets, reported in Fig.
3c, demonstrates that CLM achieves a DOF 6 times larger than that of standard microscopy, at the given resolution, or, alternatively, a resolution 6 times better, at the given DOF. The leftmost panel reports, for comparison, the standard microscope image of the three-dimensional test object, in which both target planes are clearly out of focus and none of the triple-slit groups can be recognized. Remarkably, the sets of triple slits placed at \(f_{\text {3D}_2}-f_O=+1250\,\upmu \mathrm {m}\) (i.e., on the plane farthest from the objective) are still perfectly resolved. This means that the resolution achieved on farther planes is not heavily affected by the presence of other details placed along the optical path, despite the substantial spatial filtering they perform.
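The 10% visibility criterion used for Fig. 3a can be illustrated with a toy one-dimensional model, in which the two slit images are reduced to Gaussian-blurred line profiles (a simplifying assumption of ours; the actual analysis uses the full imaging model):

```python
import numpy as np

def two_slit_visibility(d, blur_sigma):
    """Visibility of the dip between two blurred slit images.

    d: center-to-center slit distance; blur_sigma: width of the Gaussian
    blur standing in for the (possibly defocused) point-spread function.
    Toy 1-D model of the generalized Rayleigh criterion used in the text.
    """
    x = np.linspace(-3 * d, 3 * d, 2001)
    # Two line sources at +-d/2, each blurred to a Gaussian of width sigma.
    profile = (np.exp(-((x - d / 2) ** 2) / (2 * blur_sigma ** 2))
               + np.exp(-((x + d / 2) ** 2) / (2 * blur_sigma ** 2)))
    i_max = profile.max()
    i_min = profile[np.argmin(np.abs(x))]   # dip sits at the midpoint x = 0
    return (i_max - i_min) / (i_max + i_min)

# The slits count as resolved while the dip visibility stays above 10%:
print(two_slit_visibility(d=50.0, blur_sigma=15.0) > 0.10)  # True: resolved
print(two_slit_visibility(d=50.0, blur_sigma=30.0) > 0.10)  # False: merged
```

Growing the blur (e.g., by moving away from the focused plane) drives the visibility below threshold, which is exactly how the DOF boundary curves of Fig. 3a are defined.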
The results in Fig. 3c also demonstrate that CLM improves the acquisition speed by over one order of magnitude with respect to previous correlation-based light-field imaging protocols^{37,38}, in which \(5\times 10^4\) frames (to be compared with the current \(5\times 10^3\)) and additional low-pass Gaussian filtering in post-processing were required to achieve a comparable SNR. This improvement comes directly from the elimination of ghost imaging from the CLM architecture and its replacement by conventional imaging at both sensor arrays; indeed, correlating direct images has been shown to enable a significant improvement of the SNR with respect to ghost imaging^{38}.
After the three-dimensional target, we tested the effectiveness of CLM on a thick phantom reproducing features of interest in biomedical applications; the sample is made of birefringent starch granules, suspended at random positions in a transparent non-birefringent gel. The focused plane inside the sample was arbitrarily chosen at approximately half of its thickness. In Fig. 4a, we show the standard image of the focused plane, while Fig. 4b reports the images of four different planes refocused by CLM, located at optical distances from the focused plane of \(-10\,\upmu \mathrm {m}\), \(-130\,\upmu \mathrm {m}\), \(-310\,\upmu \mathrm {m}\), and \(+200\,\upmu \mathrm {m}\), respectively. Evidently, some aggregates appear in focus in only one of the four images, which provides a tool to identify their longitudinal optical distance from the focused plane. The volumetric resolution of CLM enabled us to refocus 54 planes over a \(1\,\mathrm {mm}\)-thick volume, with a transverse resolution smaller than \(20\,\upmu \mathrm {m}\) and a longitudinal resolution smaller than \(90\,\upmu \mathrm {m}\), within a FOV of about \(1\,\mathrm {mm}^2\) (see the video in the Supplementary Information).
Interestingly, in the current CLM architecture, the SNR is high enough for images from different viewpoints to be effectively observable (hence, available for further data analysis, such as three-dimensional reconstruction). In Fig. 5, we report the change of perspective obtained by CLM when moving the “viewpoint” on the objective lens plane along the horizontal direction: while the position of in-focus details does not change with the particular perspective, out-of-focus starch granules shift along the horizontal direction as the point of view is changed. Through a single correlation image, we have acquired 130,000 images of the sample from different viewpoints, distributed over the area of the objective lens (\(\sim 1\,\mathrm {cm}^2\)), each characterized by a diffraction-limited spatial resolution of \(40\,\upmu \mathrm {m}\). Such a high number of statistically independent perspectives has allowed us to produce viewpoint images in which the details of the object can be clearly distinguished, a feature that is particularly relevant in view of implementing 3D reconstruction algorithms based on viewpoint multiplicity.
Discussion
The refocusing capability of CLM has yielded a DOF 6 times larger than that of a conventional microscope having the same numerical aperture and the same (diffraction-limited) resolution in the focused plane. These results are in excellent agreement with the expected refocusing range of CLM^{44} at the given resolution (\(d=50\,\upmu \mathrm {m}\)), thus showing the reliability of the proposed CLM architecture. The volumetric resolution achieved by CLM in imaging a complex thick sample (\(1 \times 1 \times 1\,\mathrm {mm}^3\)) is a further very interesting result, considering the scanning-free nature of CLM. In the employed CLM setup, we obtained 54 independent axial sections of the biological phantom. Considering how the axial and transverse resolutions change along the optical axis, this corresponds to a total of \(7.21\times 10^6\) voxels within the volume of interest. Notice that the device considered in this work is meant as a proof-of-principle demonstrator, and its parameters are not optimized. However, the properties reported in Table 1 provide a guideline to scale the setup towards finer resolutions in view of real applications. All these results are unattainable by standard light-field microscopy, due to its hyperbolic trade-off between spatial resolution and multi-perspective views (hence, maximum achievable DOF).
CLM shares with traditional light-field microscopy the capability of performing three-dimensional imaging without moving the sample or any part of the optical apparatus; resolution depends on the distance from the focal plane, as shown in Fig. 3, but is uniform in the transverse directions, as long as light propagation can be considered paraxial^{44}. These features can be compared to the properties of a consolidated technique such as confocal microscopy, which requires both longitudinal and transverse scanning to perform imaging with uniform volumetric resolution, and to a much less time-consuming technique such as light-sheet microscopy with non-diffracting beams, which requires only longitudinal scanning, trading this interesting feature for inhomogeneous illumination and resolution in the transverse direction. The main drawback of CLM lies in its operational definition: while in incoherent first-order imaging the SNR can be increased by simply exposing the sensor for a time much longer than the source coherence time, a reliable reconstruction of the correlation function (1) requires collecting a large number of distinct frames, whose duration should preferably be matched with the source coherence time.
Increasing the acquisition speed of CLM is the principal challenge that needs to be addressed to guarantee its competitiveness with state-of-the-art light-field microscopes^{25}. Such a speed-up is in fact of paramount importance for avoiding radiation damage of biomedical samples, for in vivo imaging, and for studying dynamic processes. The large SNR of CLM with respect to the original CPI scheme represents a first significant step in this direction, as it made it possible to increase the acquisition speed by one order of magnitude while still guaranteeing an even higher SNR (see Ref.^{33}). Alongside the approach for noise mitigation implemented here (Figs. 4, 5), and outlined in the “Materials and methods” section, a further step toward acquisition speed-up is offered by compressive sensing and deep learning techniques, as increasingly applied to imaging tasks^{47,48,49,50}. From the hardware viewpoint, the acquisition speed of our microscope has ample room for improvement, both by investigating possible optimizations of our current acquisition routine and by employing cameras with better time performance. The most immediate way to start boosting the time performance of the current CLM, for example, is to employ the camera (see the “Materials and methods” section) in rolling-shutter mode, rather than the global-shutter mode we have been using to guarantee that the retrieved intensity patterns \(I_a\) and \(I_b\) (which are then correlated pixel by pixel) are simultaneous statistical samplings of the chaotic source. This condition is consistent with the theoretical model (i.e., Eq. (1)), but it is certainly interesting to search for a regime in which moving slightly away from the theory introduces artifacts small enough to justify the gain in speed. With our camera, this could mean even doubling the frame rate, reducing the current acquisition time to about 20 seconds (from the present 43).
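As a sanity check of the timing figures quoted above (treating the factor-of-two rolling-shutter speed-up as an assumption), the frame budget works out as follows:

```python
# Back-of-the-envelope acquisition budget for the numbers quoted in the text:
# N = 5e3 frames at R = 120 Hz (global shutter) take ~42 s of pure exposure,
# consistent with the ~43 s reported once overhead is included; an assumed
# 2x rolling-shutter speed-up would bring this near 20 s.
n_frames = 5_000
rate_global = 120.0                        # Hz, global shutter (from the text)
t_global = n_frames / rate_global
t_rolling = n_frames / (2 * rate_global)   # assumed 2x frame-rate gain
print(round(t_global, 1), round(t_rolling, 1))  # 41.7 20.8
```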
The chaotic source can also be significantly improved by replacing the ground-glass disk now in use (see the “Materials and methods” section) with a digital micromirror device (DMD), which adds versatility and available statistics, while significantly decreasing the source coherence time thanks to its typical frame rate of about 30 kHz. Moreover, since the DMD patterns are completely user-controllable, their features can be customized to achieve the desired SNR with the lowest possible number of frames, even experimenting with structured illumination. In this scenario, the acquisition speed will essentially be limited by the maximum frame rate of the sensor and, eventually, by the data transfer speed. This issue can be addressed by replacing our current sCMOS with faster cameras, capable of reaching 6.6 kfps at full resolution^{51}, or with ultrafast high-resolution SPAD arrays, enabling acquisition rates as high as \(10^5\) binary frames per second in a \(512 \times 512\) array^{52,53}. When choosing alternative cameras, speed should not be favored at the expense of readout noise, dynamic range, detection efficiency, or minimum exposure time, all of which are relevant parameters in correlation-based imaging. In this respect, SPAD arrays are of particular interest due to their much shorter minimum exposure time, ranging from a few hundred ps to 10 ns^{52,53,54,55}, although their binary nature may pose challenges. The minimum exposure time of the camera also regulates the possibility of extending CLM to uncontrolled thermal sources, including the fluorescent samples at the core of the related microscopy technique. CLM and fluorescence microscopy are certainly compatible, due to the chaotic nature of fluorescent light, but an experimental challenge needs to be addressed in this context: matching the low coherence time of fluorescent light with the minimum exposure time of the sensor.
An analogous problem was successfully faced by Hanbury Brown and Twiss in the Narrabri stellar interferometer^{56}, and in more recent correlation imaging experiments performed with sunlight^{57,58}. In the context of CLM, we shall address this challenge in future works.
We finally remark that the need for many short exposures to build a single image is common to other established microscopy techniques, such as STORM^{59}. This is encouraging in view of avoiding photobleaching and photodamage, also considering that the SNR requirements on each single frame are much weaker in CLM than in STORM, where the signal must be clear enough to allow centroid estimation. Exposure-related problems could also be mitigated by modulating the illumination in time, at a frequency matched with the acquisition frame rate.
Materials and methods
Experimental setup
The experimental setup employed to demonstrate CLM is shown in Fig. 6. The controllable chaotic light source is a single-mode laser with wavelength \(\lambda =532\,\mathrm {nm}\) (CNI MLL-III-532, 300 mW) illuminating a rotating ground-glass disk (GGD), with diffusion angle \(\theta _d \simeq 14^{\circ }\), whose rotation speed defines the source coherence time (\(\approx 90\,\upmu\)s). The laser spot on the disk is enlarged to a diameter of 8 mm by a \(6\times\) beam expander, and the sample is placed at a distance of \(10\,\mathrm {mm}\) after the GGD; the effective numerical aperture of our system is thus \(\mathrm {NA} = 0.23\), which sets our expected diffraction-limited resolution \(\delta = 1.6\,\upmu \mathrm {m}\). Light transmitted by the object propagates toward the objective lens O, with focal length \(f_O = 30\,\mathrm {mm}\), and reaches the first polarizing beam splitter (PBS), where it is split into two beams. The transmitted beam reaches the tube lens T, with focal length \(f_T = 125\,\mathrm {mm}\), and then impinges on the part of the sensor identified with \(\mathrm {D}_a\). The distance between the objective lens O and the tube lens T is equal to the sum of the focal lengths of the two lenses, \(f_O + f_T\), and the distance between T and \(\mathrm {D}_a\) coincides with \(f_T\). The focused object plane thus lies at a distance \(f_O\) from the objective lens. The beam reflected off the PBS illuminates lens L, with focal length \(f_L = 150\,\mathrm {mm}\), and then impinges on the part of the sensor identified with \(\mathrm {D}_b\), after being reflected by the second PBS. The distance \(S_O\) between the objective lens O and the lens L and the distance \(S_I\) between L and \(\mathrm {D}_b\) are conjugate, so that the front aperture of the objective is imaged on \(\mathrm {D}_b\). The measured magnification of this image is \(M_L = 0.31\).
Two disjoint halves of the same camera (Andor Zyla 5.5 sCMOS) are employed to simulate the two sensors \(\mathrm {D}_a\) and \(\mathrm {D}_b\), in order to guarantee synchronization. To fully exploit the dynamic range of the camera and maximize the SNR, we balance the intensities of the beams on the two halves of the sCMOS camera by means of a half-wave plate placed in the laser beam, before the GGD. The camera sensor is characterized by \(2560\times 2160\) pixels of size \(\delta _{\mathrm {p}} = 6.5\,\upmu \mathrm {m}\) and can work at up to 50 fps in full-frame mode with a global shutter (100 fps with a rolling shutter). Since the resolution \(\delta = 1.6\,\upmu \mathrm {m}\) on the object corresponds to a magnified resolution cell \(M\delta = 6.7\,\upmu \mathrm {m}\) on the sensor, the data reported in Figs. 4, 5 were generally acquired with a \(2\times 2\) hardware binning; no binning was applied when acquiring the data corresponding to points C, D, their primed counterparts, and E, in Fig. 3a. The test targets employed to acquire the data reported in Fig. 3 are Thorlabs R3L3S1N and R1DS1N. The exposure time was set to \(\tau = 92.3\,\upmu \mathrm {s}\), to match the coherence time of the source, and the acquisition rate of the camera to \(R = 120\,\mathrm {Hz}\), the maximum speed possible at our FOV in global shutter mode.
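As a consistency check of the numbers quoted above, the magnification of the imaging arm onto \(\mathrm {D}_a\) follows from the two focal lengths as \(M = f_T/f_O\) (the standard relation for this telescope-like arrangement) and reproduces the stated resolution cell \(M\delta = 6.7\,\upmu \mathrm {m}\). A minimal sketch, using only values given in this section:

```python
f_O = 30.0     # objective focal length (mm)
f_T = 125.0    # tube lens focal length (mm)
delta = 1.6    # diffraction-limited resolution on the object (um)
pixel = 6.5    # sCMOS pixel pitch (um)

M = f_T / f_O            # magnification of the imaging arm onto D_a
cell = M * delta         # magnified resolution cell on the sensor (um)
print(f"M = {M:.2f}; resolution cell on D_a = {cell:.1f} um "
      f"({cell / pixel:.2f} pixels)")
```

One resolution cell thus spans roughly one physical pixel, which is why a \(2\times 2\) binning was reserved for the acquisitions where resolution was less critical.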
Noise mitigation in the CLM correlation function
All the refocused images reported in the article are obtained by applying the refocusing formula in Eq. (3) to the experimental four-dimensional correlation function. We also applied a correction to eliminate edge effects due to the finite size of the detectors, as described in Ref. ^{45}. The problem of the noisy background occurring in the refocused images of the biomedical phantom (Figs. 4, 5) was tackled by preprocessing the correlation function. The statistical noise, which is quantified by the variance of the quantity measured in Eq. (1), can be reduced by optimizing the correlation function through the introduction of an additional term; this approach is analogous to so-called differential ghost imaging^{60}, with each pixel of \(\mathrm {D}_b\) considered as a bucket detector. The correction consists in subtracting from the correlation function a spurious self-correlation between the intensity fluctuations on each single pixel of the spatial detector \(\mathrm {D}_a\) and the intensity fluctuations on the whole detector, thus obtaining the modified correlation
with \(I_a^{\mathrm {TOT}}\) the total intensity impinging on the detector \(\mathrm {D}_a\). The free parameter K can be fixed by the condition of minimizing the variance
of the modified correlation function. Differentiating \(\mathcal {F}(\varvec{\rho }_a,\varvec{\rho }_b)\) with respect to K immediately shows that the minimum is reached for
The different outcomes of the analysis when considering the standard correlation function of Eq. (1) and the modified one of Eq. (4) can be found in the Supplementary Information.
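Since the explicit forms of Eqs. (4)–(6) appear as display equations in the published version, the sketch below only illustrates, on synthetic data, the recipe described in words above: subtract from each per-frame correlation sample a term built from the fluctuation of the total intensity on \(\mathrm {D}_a\), with a single K chosen to minimize the pooled empirical variance (the usual control-variate optimum). The assumed form of the subtracted term, the toy gamma-distributed statistics, and the array sizes are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Pa, Pb = 2000, 6, 6                  # frame pairs, pixels on D_a and D_b (toy sizes)

Ia = rng.gamma(2.0, 1.0, (N, Pa))       # synthetic intensity stacks standing in for real frames
Ib = rng.gamma(2.0, 1.0, (N, Pb))

dIa = Ia - Ia.mean(axis=0)              # intensity fluctuations on D_a
dIb = Ib - Ib.mean(axis=0)              # intensity fluctuations on D_b
dTot = dIa.sum(axis=1)                  # fluctuation of the total intensity on D_a

x = dIa[:, :, None] * dIb[:, None, :]   # per-frame samples of Delta-Ia(rho_a) * Delta-Ib(rho_b)
y = dIa * dTot[:, None]                 # per-frame self-correlation samples (assumed form)

# single K minimizing the pooled empirical variance of (x - K y):
# K* = sum Cov(x, y) / (Pb * sum Var(y)), since each y column enters Pb times
cov_xy = ((x - x.mean(axis=0)) * (y - y.mean(axis=0))[:, :, None]).mean(axis=0)
var_y = y.var(axis=0)
K = cov_xy.sum() / (Pb * var_y.sum())

C_std = x.mean(axis=0)                        # standard correlation, Eq. (1)
C_mod = (x - K * y[:, :, None]).mean(axis=0)  # variance-reduced modified correlation
print(K, C_mod.shape)
```

On real data, `Ia` and `Ib` would be the two halves of each sCMOS frame, and the refocusing formula of Eq. (3) would then be applied to `C_mod`.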
Viewpoint multiplicity
The viewpoint multiplicity is defined as the effective number of viewpoints per transverse direction. In the case of CLM, the viewpoint multiplicity is estimated as the number of resolution cells falling within the diameter D of the objective lens. The resolution cell must be evaluated by considering that, in the correlation function, the sample acts as an aperture, and thus determines the resolution on the objective lens (see Ref. ^{44} for details). Considering a sample made of a bright object of diameter a, placed in the focal plane of the objective lens, the size of the resolution cell on the lens plane reads
Therefore, the viewpoint multiplicity can be evaluated as
This result highlights an interesting reciprocity relation between the two apertures and the two resolution cells, on the objective lens and on the object plane. The above results, evaluated for an object in the focal plane, remain approximately valid as long as the object axial position z satisfies \(|z-f_O|\ll f_O\).
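A short numerical illustration of the scaling described above, where the resolution cell on the lens plane is taken as the standard far-field estimate \(\lambda f_O/a\) for an aperture of diameter a (an assumption; the paper's exact expression is in Ref. ^{44}), and both the aperture diameter D and the object size a below are hypothetical values:

```python
lam = 532e-9   # illumination wavelength (m), from the setup above
f_O = 30e-3    # objective focal length (m), from the setup above
D = 12e-3      # objective aperture diameter (m) -- hypothetical value
a = 100e-6     # diameter of the bright object (m) -- hypothetical value

cell_lens = lam * f_O / a   # resolution cell on the lens plane (assumed far-field scaling)
N_u = D / cell_lens         # viewpoint multiplicity per transverse direction
print(f"cell on lens ~ {cell_lens * 1e6:.0f} um, viewpoints per direction ~ {N_u:.0f}")
```

Note the reciprocity: enlarging the object a shrinks the resolution cell on the lens, and hence raises the number of independent viewpoints.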
Axial resolution
The DOF of each single refocused image provides information on the CLM axial resolution. As reported in the second column of Table 1, CLM is characterized by the same DOF as a standard microscope in the objective focal plane, determined by the numerical aperture and the illumination wavelength. When the object is moved out of focus, the DOF of the refocused image can be determined through a geometrical-optics approach. If two point sources, placed on opposite sides of the optical axis and separated by a distance a along the transverse direction x, are located on a transverse plane at a distance z from the objective, the correlation function they generate reads
as given by Eq. (2), with \(\delta ^{(2)}(\varvec{\rho })\) the two-dimensional Dirac delta and \(\hat{u}_x\) the unit vector along the x-axis. By refocusing the image described by Eq. (9) through the refocusing algorithm of Eq. (3), we see that, as soon as refocusing is implemented at an axial coordinate different from the one identifying the object position (\(f\ne z\)), each point-like source generates a “circle of confusion” analogous to the one of conventional imaging. Unlike other CLM features, the circle of confusion depends on the numerical aperture of the CLM device. The refocused image at a generic distance \(f\ne z\) reads
Assuming the objective lens transmission function P is a circular iris of radius R, Eq. (10) represents two circles of radius \(MR|f-z|/z\), centered at \(\pm (Mf/2z)\,\varvec{a}\). There thus exists a range of the refocusing parameter f for which Eq. (10) describes two separate circles; outside that range, the two circles begin to overlap and, ultimately, the two sources can no longer be resolved. In particular, there are two refocusing positions \(f^\prime\) and \(f^{\prime \prime }\) for which Eq. (10) describes two tangent circles. We thus define the DOF of the refocused images as
where the approximation holds for \(|f^{\prime}-f^{\prime\prime}|\ll f_O\). Hence, depending on the size of the object details of interest, the images refocused by CLM have a DOF that depends on the numerical aperture in the same way as for a standard microscope. However, the ratio between the extended DOF available to CLM and the DOF of a single refocused image gives the number of independent planes accessible through refocusing:
This result shows that, as in conventional light-field imaging, the number of longitudinal planes that can be refocused is proportional to the viewpoint multiplicity on the lens plane. The advantage of CLM over conventional light-field imaging is the larger accessible viewpoint multiplicity, as already discussed in the “Introduction”.
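The tangency condition above reduces to a few lines of arithmetic. Assuming two circles of radius \(MR|f-z|/z\) centered at \(\pm (Mf/2z)\,\varvec{a}\), tangency requires \(fa = 2R|f-z|\), which has one solution on each side of z; the values of R, z, and a below are hypothetical:

```python
R = 6e-3    # objective aperture radius (m) -- hypothetical value
z = 30e-3   # distance of the object plane from the objective (m)
a = 20e-6   # separation of the two point sources (m) -- hypothetical value

# tangency condition f*a = 2R*|f - z| gives the two bounding refocusing planes
f1 = 2 * R * z / (2 * R + a)   # tangency plane nearer than z
f2 = 2 * R * z / (2 * R - a)   # tangency plane farther than z
dof = f2 - f1                  # DOF of the refocused image
print(f"DOF ~ {dof * 1e6:.1f} um (compare a*z/R = {a * z / R * 1e6:.1f} um)")
```

The exact bracket reproduces the small-detail approximation \(\mathrm{DOF}\approx az/R\), confirming the NA scaling stated in the text.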
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Mertz, J. Strategies for volumetric imaging with a fluorescence microscope. Optica 6, 1261 (2019).
Oku, H., Hashimoto, K. & Ishikawa, M. Variable-focus lens with 1-kHz bandwidth. Opt. Express 12, 2138 (2004).
Mermillod-Blondin, A., McLeod, E. & Arnold, C. B. High-speed varifocal imaging with a tunable acoustic gradient index of refraction lens. Opt. Lett. 33, 2146 (2008).
Huisken, J., Swoger, J., Del Bene, F., Wittbrodt, J. & Stelzer, E. H. K. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305, 1007 (2004).
Fahrbach, F. O., Simon, P. & Rohrbach, A. Microscopy with self-reconstructing beams. Nat. Photonics 4, 780 (2010).
Fahrbach, F. O. & Rohrbach, A. A line scanned light-sheet microscope with phase shaped self-reconstructing beams. Opt. Express 18, 24229 (2010).
Vettenburg, T. et al. Light-sheet microscopy using an Airy beam. Nat. Methods 11, 541–544 (2014).
Chen, B. C. et al. Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).
Chu, L. A. et al. Rapid single-wavelength light-sheet localization microscopy for clarified tissue. Nat. Commun. 10, 4762 (2019).
Takanezawa, S., Saitou, T. & Imamura, T. Wide field light-sheet microscopy with lens-axicon controlled two-photon Bessel beam illumination. Nat. Commun. 12, 2979 (2021).
Moneron, G. et al. Fast STED microscopy with continuous wave fiber lasers. Opt. Express 18, 1302–1309 (2010).
Zong, W. et al. Fast high-resolution miniature two-photon microscopy for brain imaging in freely behaving mice. Nat. Methods 14, 713–719 (2017).
Maurer, C., Khan, S., Fassl, S., Bernet, S. & RitschMarte, M. Depth of field multiplexing in microscopy. Opt. Express 18, 3023 (2010).
Waller, L. & Tian, L. Computational imaging: Machine learning for 3D microscopy. Nature 523, 416 (2015).
Adelson, E. H. & Wang, J. Y. Single lens stereo with a plenoptic camera. IEEE Trans. Pattern Anal. Mach. Intell. 14, 99 (1992).
Ng, R. et al. Light field photography with a handheld plenoptic camera. Comput. Sci. Tech. Rep. CSTR 2, 1 (2005).
Levoy, M., Ng, R., Adams, A., Footer, M. & Horowitz, M. Light field microscopy. ACM Trans. Graph. 25, 924 (2006).
Georgiev, T. G. & Lumsdaine, A. Focused plenoptic camera and rendering. J. Electron. Imaging 19, 021106 (2010).
Goldlücke, B., Klehm, O., Wanner, S. & Eisemann, E. Plenoptic cameras. In Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality (eds Magnor, M. et al.) (CRC Press, Cambridge, 2015).
Muenzel, S. & Fleischer, J. W. Enhancing layered 3D displays with a lens. Appl. Opt. 52, D97 (2013).
Minsky, M. Memoir on inventing the confocal microscope. Scanning 10, 128 (1988).
Dansereau, D. G., Pizarro, O. & Williams, S. B. Decoding, calibration and rectification for lenselet-based plenoptic cameras. In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2013).
Perez, J. et al. Super-resolution in plenoptic cameras using FPGAs. Sensors 14, 8669 (2014).
Li, Y., Sjöström, M., Olsson, R. & Jennehag, U. Scalable coding of plenoptic images by using a sparse set and disparities. IEEE Trans. Image Process. 25, 80 (2016).
Prevedel, R. et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 11, 727 (2014).
Skocek, O. et al. High-speed volumetric imaging of neuronal activity in freely moving rodents. Nat. Methods 15, 429 (2018).
Classen, A., von Zanthier, J., Scully, M. O. & Agarwal, G. S. Superresolution via structured illumination quantum correlation microscopy. Optica 4, 580 (2017).
Kviatkovsky, I., Chrzanowski, H. M., Avery, E. G., Bartolomaeus, H. & Ramelow, S. Microscopy with undetected photons in the mid-infrared. Sci. Adv. 6, eabd0264 (2020).
Simon, D. S. & Sergienko, A. V. Twin-photon confocal microscopy. Opt. Express 18, 22147 (2010).
Monticone, D. G. et al. Beating the Abbe diffraction limit in confocal microscopy via nonclassical photon statistics. Phys. Rev. Lett. 113, 143602 (2014).
Samantaray, N., Ruo-Berchera, I., Meda, A. & Genovese, M. Realization of the first sub-shot-noise wide field microscope. Light Sci. Appl. 6, e17005 (2017).
Altmann, Y. et al. Quantum-inspired computational imaging. Science 361, 6403 (2018).
D’Angelo, M., Pepe, F. V., Garuccio, A. & Scarcelli, G. Correlation plenoptic imaging. Phys. Rev. Lett. 116, 223602 (2016).
Pepe, F. V., Scarcelli, G., Garuccio, A. & D’Angelo, M. Plenoptic imaging with second-order correlations of light. Quantum Meas. Quantum Metrol. 3, 20 (2016).
Pepe, F. V., Di Lena, F., Garuccio, A., Scarcelli, G. & D’Angelo, M. Correlation plenoptic imaging with entangled photons. Technologies 4, 17 (2016).
Pepe, F. V., Vaccarelli, O., Garuccio, A., Scarcelli, G. & D’Angelo, M. Exploring plenoptic properties of correlation imaging with chaotic light. J. Opt. 19, 114001 (2017).
Pepe, F. V. et al. Diffraction-limited plenoptic imaging with correlated light. Phys. Rev. Lett. 119, 243602 (2017).
Scala, G., D’Angelo, M., Garuccio, A., Pascazio, S. & Pepe, F. V. Signal-to-noise properties of correlation plenoptic imaging with chaotic light. Phys. Rev. A 99, 053808 (2019).
Pittman, T. B., Shih, Y. H., Strekalov, D. V. & Sergienko, A. V. Optical imaging by means of two-photon quantum entanglement. Phys. Rev. A 52, R3429(R) (1995).
Gatti, A., Brambilla, E., Bache, M. & Lugiato, L. A. Ghost imaging with thermal light: Comparing entanglement and classical correlation. Phys. Rev. Lett. 93, 093602 (2004).
D’Angelo, M. & Shih, Y. Quantum imaging. Laser Phys. Lett. 2, 567 (2005).
Valencia, A., Scarcelli, G., D’Angelo, M. & Shih, Y. Two-photon imaging with thermal light. Phys. Rev. Lett. 94, 063601 (2005).
Scarcelli, G., Berardi, V. & Shih, Y. Can two-photon correlation of chaotic light be considered as correlation of intensity fluctuations? Phys. Rev. Lett. 96, 063602 (2006).
Scagliola, A., Di Lena, F., Garuccio, A., D’Angelo, M. & Pepe, F. V. Correlation plenoptic imaging for microscopy applications. Phys. Lett. A 384, 126472 (2020).
Massaro, G., Di Lena, F., D’Angelo, M. & Pepe, F. V. Effect of finite-sized optical components and pixels on light-field imaging through correlated light. Sensors 22, 2778 (2022).
Ng, R. Fourier slice photography. ACM Trans. Graph. 24, 735 (2005).
Katz, O., Bromberg, Y. & Silberberg, Y. Compressive ghost imaging. Appl. Phys. Lett. 95, 131110 (2009).
Liu, J., Zhu, J., Lu, C. & Huang, S. High-quality quantum-imaging algorithm and experiment based on compressive sensing. Opt. Lett. 35, 1206 (2010).
Delahaies, A., Rousseau, D., Gindre, D. & ChapeauBlondeau, F. Exploiting the speckle noise for compressive imaging. Opt. Commun. 284, 3939 (2011).
Barbastathis, G., Ozcan, A. & Situ, G. On the use of deep learning for computational imaging. Optica 6, 921 (2019).
Phantom Ultrahigh-Speed. https://www.phantomhighspeed.com/products/cameras/ultrahighspeed. Accessed on 19 September 2021.
Bruschini, C., Homulle, H., Antolovic, I. M., Burri, S. & Charbon, E. Single-photon avalanche diode imagers in biophotonics: Review and outlook. Light Sci. Appl. 8, 87 (2019).
Ulku, A. et al. Wide-field time-gated SPAD imager for phasor-based FLIM applications. Methods Appl. Fluoresc. 8, 024002 (2020).
Madonini, F., Severini, F., Zappa, F. & Villa, F. Single photon avalanche diode arrays for quantum imaging and microscopy. Adv. Quantum Technol. 4, 2100005 (2021).
Villa, F., Severini, F., Madonini, F. & Zappa, F. SPADs and SiPMs arrays for long-range high-speed light detection and ranging (LiDAR). Sensors 21, 3839 (2021).
Hanbury Brown, R. & Twiss, R. Q. A test of a new type of stellar interferometer on Sirius. Nature 178, 1046 (1956).
Karmakar, S., Meyers, R. & Shih, Y. H. Ghost imaging experiment with sunlight compared to laboratory experiment with thermal light. Proc. SPIE 8518, 851805 (2012).
Liu, X.-F. et al. Lensless ghost imaging with sunlight. Opt. Lett. 39, 2314 (2014).
Rust, M. J., Bates, M. & Zhuang, X. Stochastic optical reconstruction microscopy (STORM) provides sub-diffraction-limit image resolution. Nat. Methods 3, 793–795 (2006).
Ferri, F., Magatti, D., Lugiato, L. A. & Gatti, A. Differential ghost imaging. Phys. Rev. Lett. 104, 253603 (2010).
Acknowledgements
The Authors thank Francesco Scattarella for making the videos reported in the Supplementary Information. This work was supported by the Istituto Nazionale di Fisica Nucleare (INFN) projects PICS, PICS4ME, TOPMICRO, and, partially, “Qu3D”, and by the Ministero dell’Università e della Ricerca (MUR) PON ARS project “CLOSE – Close to Earth”. Project Qu3D is supported by the Italian Istituto Nazionale di Fisica Nucleare, the Swiss National Science Foundation (grant 20QT21_187716 “Quantum 3D Imaging at high speed and high resolution”), the Greek General Secretariat for Research and Technology, and the Czech Ministry of Education, Youth and Sports, under the QuantERA programme, which has received funding from the European Union’s Horizon 2020 research and innovation programme.
Author information
Authors and Affiliations
Contributions
M.D. and F.V.P. conceived the idea and wrote the paper. M.D., A.G. and F.V.P. supervised the work. A.S. contributed to the development of the theory and the algorithms, to the first design of the experimental setup, and, together with F.D., to the preliminary data acquisition. F.D. has contributed to the improvement of the setup and to the optimization of the data analysis program. G.M. has further optimized the data analysis and noise mitigation programs, contributed to writing the paper, and, in collaboration with D.G., performed the experiment. G.S. provided support for the design of the experiment, and contributed to writing the paper. All authors contributed to revising the manuscript, and approved its final version.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Massaro, G., Giannella, D., Scagliola, A. et al. Light-field microscopy with correlated beams for high-resolution volumetric imaging. Sci. Rep. 12, 16823 (2022). https://doi.org/10.1038/s41598-022-21240-1
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-022-21240-1
This article is cited by
Correlated-photon imaging at 10 volumetric images per second
Scientific Reports (2023)
Deep learning approach for denoising low-SNR correlation plenoptic images
Scientific Reports (2023)