Abstract
We explain how to concentrate light simultaneously at multiple selected volumetric positions by means of a 4D illumination light field. First, to select target objects, a 4D imaging light field is captured. A light-field mask is then computed automatically for this selection to avoid illumination of the remaining areas. With one-photon illumination, the simultaneous generation of complex volumetric light patterns becomes possible. Since a full light field can be captured and projected simultaneously at the desired exposure and excitation times, short readout and illumination durations are supported.
Introduction
Fast optical volumetric recording, as required in numerous applications, remains a challenge in microscopy. While regular scanning methods (e.g., confocal microscopy and its variants) can be too slow, specialized light-sheet scanning^{1}, 3D imaging with spatial light modulators (SLMs)^{2}, multifocus microscopy^{3}, and light-field microscopy (LFM)^{4} promise faster scanning rates.
LFM^{5} captures the 4D incident light by means of a microlens array (MLA) placed at the intermediate image plane of the imaging path, and thus supports the computation of a focal stack with a single sensor recording. The limited spatial resolution that is caused by simultaneously multiplexing 2D location and 2D direction information at the same image sensor can be increased computationally by 3D deconvolution^{6} and by using additional phase masks in the optical path^{7}.
Besides fast readout, precisely controllable excitation is another requirement of recent microscopy techniques. In fluorescence microscopy, for instance, such methods are used to accurately illuminate only certain parts of a probe and to avoid out-of-focus excitation or even damage to the probe by photobleaching or phototoxicity. Spatially controllable excitation can be achieved by sequentially illuminating the sample with a focused beam while either moving the sample or scanning the beam with a mirror^{8}. Multiple simultaneous beams can be generated with spatial light modulators (SLMs), such as digital micromirror devices^{9}, or phase modulators (PMs), such as the liquid-crystal PMs used for holographic projection^{10}. The latter require coherent light produced by a laser light source.
One-photon excitation as discussed above, however, also excites the probe in the light cone above and below the focal plane, as illustrated in Fig. 1a, which makes controlled excitation within a volumetric probe impossible. Out-of-focus excitation can be avoided by two-photon techniques^{11,12,13,14}, as shown in Fig. 1b. This allows deeper penetration into scattering media than simple one-photon methods, but limits simultaneous excitation to a single plane.
Various light-modulation approaches exist that strive for controlled, simultaneous excitation of multiple selected sample points within a volume. Combining two micromirror arrays in the illumination path of a microscope (one in the intermediate image plane and one in the back focal plane^{15,16}), for instance, enables the generation of spatially and angularly controlled light patterns that avoid out-of-focus excitation. However, the interplay of two 2D SLMs does not support arbitrary illumination patterns within a volume: either angular sampling is constant for a spatial location, or spatial sampling is constant for a direction. Thus, an illumination pattern such as the one shown in Fig. 1c cannot be achieved simultaneously, but requires sequential excitation.
Holographic projection with PMs^{2,17,18} overcomes this primary limitation. However, spatially varying diffraction efficiency and the presence of zero-order diffraction spots, ghosting, and intensity fluctuations (speckles) limit the excitation area as well as the spatial and temporal resolution of holographic projections^{19,20}. Furthermore, coherent computer-generated holograms limit the types of illumination patterns that can be generated^{21}.
We explain how to concentrate light simultaneously at multiple selected volumetric positions by means of a 4D illumination light field^{22}. First, a 4D imaging light field is captured to select target objects. A light-field mask is then computed automatically for this selection to avoid illumination of the remaining areas. With one-photon illumination, the simultaneous generation of complex volumetric illumination patterns, like the ones shown in Figs 1c and 7, becomes possible. In contrast to holographic projection approaches, light-field projection supports arbitrary patterns and non-coherent light sources, and retains the same sampling quality across the whole field of view. Since a full light field can be captured and projected simultaneously at the desired exposure and excitation times, short readout and illumination durations are supported.
Masked Light-Field Illumination
Figure 2 illustrates the principle of volumetric excitation by means of masked light-field illumination using 10–20 μm fluorescent microspheres. With one sensor recording, we capture a 4D light field of the probe under full illumination and apply synthetic aperture rendering^{23,24} to compute a 3D focal stack or individual perspective images from it. The focal stack undergoes 3D deconvolution^{25} to obtain a defocus-free z-stack. Within the z-stack, we select the parts of the probe (i.e., the 3D-segmented microspheres, shown in red in our example) that are to be excited. From the selection in the z-stack, we then determine a 4D light-field mask that is projected simultaneously into the probe such that as much light as possible is concentrated at the volumetric positions of the selection while the illumination of other regions is minimized. Re-recording a light field under this volumetric excitation shows the selected regions in the perspective images and the z-stack (Fig. 2, orange).
Computing the masked light field that leads to the desired volumetric excitation pattern is the goal of the following optimization: Let v_{i} be the illumination reaching volumetric point i, α_{i} the transmission of light at i, and r_{j} the light intensity of light-field ray j. Then β_{i,j} is the total transmission of ray j to volumetric point i, i.e., the product of the transmissions of all volumetric points on its path through the probe to i: β_{i,j} = α_{1} · α_{2} · … · α_{n}, where volumetric points 1, 2, …, n lie on the path of ray j to volumetric point i. The light transport for a given light field through a given volume can then be described as

v = Br, (1)
where r = [r_{1}, r_{2}, …, r_{J}] is the vector of all light-field ray intensities, v = [v_{1}, v_{2}, …, v_{I}] the vector of all light reaching each volumetric point, and B the transport matrix of all ray-to-point transmission coefficients β_{i,j} through the probe. With known transport matrix B, the vector v can be defined to contain the coefficient 1 for all volumetric sample points to be excited, and the coefficient 0 for all sample points not to be excited. Note that we assume 1 and 0 to correspond to the maximum and minimum brightness of the light source, respectively. Volumetric sample points whose illumination is irrelevant, such as carrier material (e.g., index-matching gel), are not included in v. Solving Eq. 1 for r results in an illumination light-field mask that maximizes irradiation at the selected points to be excited while minimizing the light at points that must not be excited. We apply the simultaneous algebraic reconstruction technique (SART)^{26} to solve Eq. 1.
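The SART solver can be sketched in a few lines. The following is a minimal, illustrative implementation for dense matrices (the actual system is sparse and far larger); the function name, parameters, and relaxation scheme are our own choices for a common SART variant, not necessarily the exact implementation used.

```python
import numpy as np

def sart(B, v, iterations=500, relax=0.5):
    """Approximate a non-negative solution r of v = B r with SART.

    B : (I, J) ray-to-point transmission matrix (beta coefficients)
    v : (I,)   target irradiation per volumetric point (1 = excite, 0 = avoid)
    """
    row_sums = B.sum(axis=1)           # per-point normalization
    col_sums = B.sum(axis=0)           # per-ray normalization
    row_sums[row_sums == 0] = 1.0      # guard against empty rows/columns
    col_sums[col_sums == 0] = 1.0
    r = np.zeros(B.shape[1])
    for _ in range(iterations):
        residual = (v - B @ r) / row_sums          # normalized per-point error
        r += relax * (B.T @ residual) / col_sums   # distribute error to rays
        np.clip(r, 0.0, 1.0, out=r)                # ray intensities stay in [0, 1]
    return r
```

Applied to the full system, v holds 1 for the points in E and 0 for the points in N, and the returned r is the continuous illumination light-field mask.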
For illumination light fields of J rays and volumes of I points, B is large, containing I × J coefficients. Despite being sparse, Eq. 1 is expensive to solve. For practical real-time applications, we therefore approximate the illumination mask as follows:
We distinguish between three types of volumetric points: those that should be excited (E, e.g., microspheres to be stimulated), those that should not be excited (N, e.g., microspheres not to be stimulated), and those for which illumination is irrelevant (O, e.g., carrier material such as an index-matching gel).
We then determine all light-field rays R that pass through points in E by ray casting the z-stack, and consider three general application-specific illumination strategies: (S1) All rays in R equal 1 (which corresponds to full illumination along the rays). (S2) Rays in R equal 1 only if they do not pass through points in N. (S3) Rays in R equal 1 only if they do not pass through points in N before they pass through a point in E. All other rays of the illumination light field are set to 0 (no light).
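The per-ray rules can be written down directly. Below is a small illustrative sketch (naming is ours), where each ray is reduced to the ordered sequence of labels of the volumetric points it passes through:

```python
def ray_on(path, strategy):
    """Return 1 if an illumination ray should be lit, 0 otherwise.

    path     : labels of volumetric points along the ray, ordered from the
               light source onward; 'E' = excite, 'N' = do not excite,
               'O' = illumination irrelevant.
    strategy : 'S1', 'S2', or 'S3' as defined in the text.
    """
    if 'E' not in path:                     # rays that miss E entirely stay dark
        return 0
    if strategy == 'S1':                    # full illumination along the ray
        return 1
    if strategy == 'S2':                    # must avoid N entirely
        return 0 if 'N' in path else 1
    if strategy == 'S3':                    # N allowed only behind the first E
        return 1 if 'N' not in path or path.index('E') < path.index('N') else 0
    raise ValueError("unknown strategy: " + strategy)
```

For example, a ray passing first through a point in N and then through a point in E is lit under S1 but off under S2 and S3.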
These three strategies are illustrated in Fig. 3 using the example of two occluding microspheres i and ii (Fig. 3a,b), and an optical axial-slice simulation of eight excitation regions and eight non-excitation regions (Fig. 3g–i). In Fig. 3c,d, we selected the foreground sphere i for excitation (i.e., to be part of E) and the background sphere ii to remain unexcited (i.e., to be part of N), and vice versa in Fig. 3e,f. With strategy S1, E receives a maximum of light, while N is also fractionally illuminated due to partial occlusion and the transparency of the microspheres/excitation regions (Fig. 3c,e,g). Strategy S2 avoids illumination of N entirely but, depending on the degree of occlusion, may result in lower excitation levels in E (Fig. 3d,f,h). Both strategies assume that the probes to be excited are not fully opaque but transmit a certain amount of excitation light. If probes can be considered to block light, then strategy S3 is appropriate (Fig. 3i). For our example in Fig. 3b, illuminating the non-occluded microsphere i would produce the same outcome as shown in Fig. 3c (S1), while exciting the occluded microsphere ii results in Fig. 3f (S2). While computing an expensive optimization based on Eq. 1 (results are also shown in Fig. 3 for comparison) requires up to several hours (including the generation of B), our ray-casting approximation runs at interactive rates (see Method section).
Discussion
According to Sparrow's criterion, the smallest resolvable spot for wavelength λ, numerical aperture NA, and magnification M is o = 0.47λM/NA. For a light-field microscope, the theoretical number of resolvable directions D equals the microlens pitch m divided by o. However, if the pixel pitch p of the SLM or camera used exceeds o, then D is limited to m/p. The lateral resolution in the field plane corresponds to the resolution of the MLA. The axial depth of field is S ≈ ((2 + D)λn)/(2NA^{2}), which corresponds to the smallest focus shift within a focusable range of F ≈ ((2 + D^{2})λn)/(2NA^{2}), where n is the refractive index of the imaging medium between the probe and the objective front lens (e.g., air n = 1, water n = 1.33, or oil n = 1.52)^{5}. The numerical aperture of the microlenses should match the numerical aperture of the objective (F-number M/(2NA))^{5}.
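These relations are easy to evaluate numerically. The helper below is our own sketch; the Sparrow spot size o = 0.47λM/NA is an assumption that is consistent with the parameter values quoted in the discussion of Fig. 5. It reproduces the sampling parameters of the 40×/0.95 NA configuration:

```python
def lf_sampling(wavelength_um, NA, M, lens_pitch_um, pixel_pitch_um, n=1.0):
    """Light-field sampling parameters; all lengths in micrometers."""
    o = 0.47 * wavelength_um * M / NA            # Sparrow spot size at the image plane
    D = lens_pitch_um / max(o, pixel_pitch_um)   # number of resolvable directions
    S = (2 + D) * wavelength_um * n / (2 * NA ** 2)       # smallest axial focus shift
    F = (2 + D ** 2) * wavelength_um * n / (2 * NA ** 2)  # focusable range
    return o, D, S, F

# 40x/0.95 NA objective, m = 125 um, p = 13.7 um, 470 nm excitation (air, n = 1):
o, D, S, F = lf_sampling(0.47, 0.95, 40, 125, 13.7)
# o ~ 9.3 um, D ~ 9.12, S ~ 2.9 um, F ~ 22.2 um
```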
Figure 4 illustrates the sampling nature of our approach based on an optical simulation of an empty volume (i.e., without occlusions) within a lateral field of view of 263 μm and an axially focusable range F = 22.2 μm, achieved with a 40×/0.95 NA objective, a m = 125 μm microlens pitch, and a λ = 470 nm excitation wavelength. The point-spread function (PSF) of the illumination focused on the field plane is shown in Fig. 4a. The average full width at half maximum (FWHM) over all axial distances is approximately L = 3.75 μm (lateral) × A = 5.1 μm (axial), and corresponds to the smallest focusable point size (i.e., the highest sampling resolution) in this example. The smallest axial focus shift is S = 2.9 μm. Since the light-field ray space (principal ray directions are illustrated in Fig. 4a) is discrete, with limited directional and spatial resolution, the ray-sampling density varies across the focusable range. The dimension of a focus point depends on the number of ray intersections at its position. Thus, the axial and lateral resolutions vary with the distance to the field plane, as shown in Fig. 4b,c. They range from 3.3 μm to 4.2 μm laterally and from 3.5 μm to 7.2 μm axially for the example shown in Fig. 4. We consider mean resolutions (dotted lines in Fig. 4c) in the remainder of the discussion.
While the lateral resolution of a lightfield microscope can be increased mainly by downscaling the microlens pitch of the MLA, the axial resolution is enhanced by reducing the pixel pitch of the DMD (down to the diffraction limit set by o), increasing the NA of the objective, and reducing the excitation wavelength.
Figure 5 illustrates the impact of different microlens pitches (m) for three different objectives. While decreasing m increases the mean lateral and axial resolutions (L, A), the focusable range (F) is strongly reduced. The simulation in Fig. 4 uses a 40×/0.95 NA objective with a m = 125 μm microlens pitch, which achieves the following sampling: o = 9.3 μm, p = 13.7 μm, D = 9.12, S = 2.9 μm, F = 22.2 μm. The simulated PSF size for this configuration is L = 3.75 μm (lateral) and A = 5.1 μm (axial), which results in a sampling density of 13.94 × 10^{−3} PSFs per μm^{3} within the excitable volume. The same 40×/0.95 NA objective with a m = 150 μm microlens pitch results in the following sampling parameters: o = 9.3 μm, p = 13.7 μm, D = 10.9, S = 3.37 μm, F = 31.74 μm. The PSF size for this configuration is L = 4.5 μm (4.0–5.2 μm) lateral and A = 7.5 μm (6.4–10.4 μm) axial, which results in a sampling density of 6.63 × 10^{−3} μm^{−3}.
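The quoted sampling densities appear consistent with assigning each PSF a cuboid of L × L × A; this relation is our own inference from the quoted numbers and is not stated explicitly in the text:

```python
def psf_density(L_um, A_um):
    """PSFs per cubic micrometer, treating each PSF as an L x L x A cuboid.

    This 1/(L^2 * A) relation is an assumption inferred from the quoted values.
    """
    return 1.0 / (L_um ** 2 * A_um)

# m = 125 um configuration: 1/(3.75^2 * 5.1) ~ 13.94e-3 PSFs per um^3
```

For the m = 150 μm configuration the relation yields approximately 6.6 × 10^{−3} μm^{−3}, close to the quoted 6.63 × 10^{−3}; the small difference presumably stems from rounding of the FWHM values.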
Figure 6 illustrates the excitation contrast for the three masking strategies (S1, S2, and S3) and a varying number of excitation points (E) and non-excitation points (N). The points are randomly distributed based on Poisson disk sampling^{27}. Figure 6a shows an optical simulation with a density of 0.5 × 10^{−3} μm^{−3}, while Fig. 6b simulates a density of 1.45 × 10^{−3} μm^{−3}. For all three strategies, the contrast decreases as the number of excitation positions E increases (and, consequently, the number of non-excitation points N decreases). A high contrast indicates low light pollution at N and high excitation levels at E. Within transparent, non-scattering volumes, strategy S2 generates the highest contrast and S1 the lowest. The simulation volumes and the resulting volumetric illumination patterns for strategy S2 are illustrated in Fig. 7a,b.
Our approach is currently limited to transparent, non-scattering probes due to the use of single-photon illumination. In the future, we would like to investigate combining our method with two-photon excitation to support applications in dense, scattering volumes. However, single-shot two-photon light-field acquisition is not possible. Therefore, alternative methods for focal-stack recording, prior to volumetric two-photon light-field excitation, have to be employed. Furthermore, two-photon excitation requires light-efficient optics and high-power lasers.
Our experimental (Fig. 2) and simulated results (Figs 6 and 7) indicate that our technique might have potential applications in the field of optogenetics^{28,29}. Typical neuron sizes and cell densities for model organisms (e.g., C. elegans and zebrafish larvae) are in the range of – μm and 0.15–6.44 × 10^{−3} μm^{−3}, respectively. However, we currently neglect some effects occurring in organic probes, such as scattering, natural fluctuations in cell size and density, and the neurons' activation sensitivity. Future experiments on actual organic tissue will provide further insights.
Method
Our proof-of-concept prototype is a replica of the light-field microscope of Levoy et al.^{22} and is based on a Nikon Eclipse 80i fluorescence microscope. The prototype is equipped with a 20×/0.75 NA and a water-immersion 60×/1.2 NA objective (n = 1.33). For controlled and repeatable experiments, we used commercially available fluorescent microspheres (100 μm and 10–20 μm) and embedded them in a silicone elastomer carrier. Application at other scales requires different MLAs or objectives and is discussed in the optical simulations above.
The prototype and the optical layout are shown in Fig. 8. The illumination system employs a square-sided, plano-convex microlens array (MLA2) at the intermediate image plane and a DMD SLM (Texas Instruments Digital Light Processor driven by a Discovery 1100 controller from Digital Light Innovations; pixel size p_{il} = 13.7 μm) with a resolution of 1024 × 768. As a light source, we use a 120 W metal halide lamp (X-Cite 120, EXFO) with corresponding filters. For capturing light fields, the imaging system uses an MLA (MLA1) at the intermediate image plane. A monochrome CCD camera (Retiga 4000R; pixel size p_{im} = 7.4 μm) with a resolution of 2048 × 2048 is used for recording. The non-square illumination area caused by the DMD crops the usable field of view. For the experiments in Fig. 2 we used the 60×/1.2 NA objective, and for the results in Fig. 3 we used the 20×/0.75 NA objective. The sampling parameters achieved with these configurations are listed in Table 1. The illumination loss of our prototype setup from light source to probe is approximately 90%.
Eq. 1 assumes optical symmetry between the imaging and illumination light fields. In practice, however, slight misalignments in the optical paths and differing microlenses cause misalignments and multi-mappings between illumination and imaging rays. To account for this, we calibrate the mapping between the two by a one-time measurement of a transport matrix T between the illumination light field r′ and the imaging light field r, using a front-surface mirror (with a 0.17 mm cover slip on top, placed perpendicularly underneath the objective) and structured-light calibration^{30}. Each column of T contains the transport coefficients from one illumination ray r′_{s,t,u,v} to all imaging rays r_{s,t,u,v}. Thus, the size of T equals the number of illumination rays (number of columns) times the number of imaging rays (number of rows). Note that, to compensate for the calibration with a front-surface mirror, the angular coordinates of the illumination rays are inverted (i.e., r′_{s,t,u,v} becomes r′_{s,t,−u,−v}) before their coefficients are stored in T. Eq. 1 then extends to v = BTr′, assuming that the z-stack from which B is determined is defined in the imaging ray space of r. The same applies to our approximated masks: light fields in imaging ray space can be mapped to illumination ray space (or vice versa) by multiplying with T (or with T^{T}). Note that T is constant for a fixed optical configuration and is invariant to focus changes of the microscope. In the masking experiments with our 20× and 60× prototype configurations, B is large, containing 1.4 × 10^{12} and 4.31 × 10^{11} coefficients, respectively.
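The calibrated extension v = BTr′ amounts to a simple matrix chain. A toy illustration (all values and dimensions are ours; the real T is a huge sparse matrix measured once per optical configuration):

```python
import numpy as np

# Toy dimensions: 2 volumetric points, 3 imaging rays, 3 illumination rays.
# T: column u holds the transport of illumination ray u to all imaging rays;
# here it is a simple permutation (an idealized, misalignment-free mapping).
T = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
# B: ray-to-point transmissions (beta coefficients) in imaging ray space.
B = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])

r_ill = np.array([1.0, 0.0, 1.0])   # binary illumination light-field mask
v = B @ (T @ r_ill)                 # Eq. 1 extended: v = B T r'
```

Mapping a light field between the two ray spaces then amounts to a multiplication with T or its transpose, as stated above.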
Our prototype implementation uses custom software^{5} to drive the microscope, Matlab for processing tasks (i.e., deconvolution and solving Eq. 1), and ImageJ for volume segmentation (i.e., selecting E). In Fig. 2, the z-stack size is 55 × 45 × 61 and sparse (i.e., a high number of volumetric points i are zero). For this setup, our approximate solution of Eq. 1 was calculated in 7–10 seconds on an Intel Core i7 CPU at 2.7 GHz with 24 GB of RAM. Computation times are independent of the number of non-zero volumetric points. In our approximation, the illumination light field is constructed by ray casting its rays j in the volume (z-stack) and applying the rules of the masking strategies (S1, S2, and S3) discussed above. Note that the resulting light field is binary (0 for no light and 1 for light). We estimate that a GPU implementation of our approximation can lead to a significant speedup (at least 10×). When solving Eq. 1 with SART, the number of non-zero volumetric points affects the performance. Runtimes ranged from 4 minutes for 0.16% non-zero points (E and N) up to 8 hours for 12.25% non-zero points. Note that most of the computation time is spent on the construction of the transport matrix B (e.g., 8 hours for construction and 1 minute for solving with SART). The matrix B is constructed by ray casting each light-field ray j in the volume (z-stack) and setting the corresponding total transmission values β_{i,j} in the matrix. The computed illumination light field consists of continuous values r_{j} ∈ [0, 1]. In our experiments, we used 500 SART iterations and a single transmission value α for all non-empty volumetric points (E or N).
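The construction of B by ray casting can be sketched as follows (illustrative names; we assume β_{i,j} accumulates the transmissions of the points in front of i on ray j, consistent with the definition of β_{i,j} in the text):

```python
import numpy as np

def build_B(ray_paths, alpha, num_points):
    """Construct the (dense, for illustration) transport matrix B by ray casting.

    ray_paths  : for each ray j, the ordered indices of volumetric points it crosses
    alpha      : per-point transmission values alpha_i
    num_points : total number of volumetric points I
    """
    B = np.zeros((num_points, len(ray_paths)))
    for j, path in enumerate(ray_paths):
        transmission = 1.0                 # light remaining along ray j
        for i in path:
            B[i, j] = transmission         # beta_{i,j}: light reaching point i
            transmission *= alpha[i]       # attenuate for points further along
    return B
```

With α = 0.5 for all non-empty points, a ray crossing points 0, 1, 2 in order yields the coefficients β = 1, 0.5, 0.25.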
The samples in Fig. 2 were Fluorescent Green Polyethylene 10–20 μm microspheres (material density 0.99–1.01 g/cm^{3}; peak excitation 470 nm; peak emission 505 nm; distributor: Cospheric) mixed with polydimethylsiloxane (Elastosil RT 604 from Wacker) of material density 0.97 g/cm^{3} (mixing ratio: 6.5% spheres). In Fig. 3, we used Rhodamine B Polyethylene 100 μm microspheres (material density 0.98 g/cm^{3}; peak excitation 540 nm; peak emission 630 nm; distributor: Cospheric) mixed with the same silicone elastomer (as above) at a mixing ratio of 5% spheres.
Additional Information
How to cite this article: Schedl, D. C. & Bimber, O. Volumetric Light-Field Excitation. Sci. Rep. 6, 29193; doi: 10.1038/srep29193 (2016).
References
1. Bouchard, M. B. et al. Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms. Nat. Photonics 9, 113–119 (2015).
2. Quirin, S., Peterka, D. S. & Yuste, R. Instantaneous three-dimensional sensing using spatial light modulator illumination with extended depth of field imaging. Opt. Express 21, 16007–16021 (2013).
3. Abrahamsson, S. et al. Fast and sensitive multicolor 3D imaging using aberration-corrected multifocus microscopy. Nat. Methods 10, 60–63 (2013).
4. Prevedel, R. et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 11, 727–730 (2014).
5. Levoy, M., Ng, R., Adams, A., Footer, M. & Horowitz, M. Light field microscopy. ACM Trans. Graph. 25, 924–934 (2006).
6. Broxton, M. et al. Wave optics theory and 3D deconvolution for the light field microscope. Opt. Express 21, 25418–25439 (2013).
7. Cohen, N. et al. Enhancing the performance of the light field microscope using wavefront coding. Opt. Express 22, 24817–24839 (2014).
8. Shepherd, G. M., Pologruto, T. A. & Svoboda, K. Circuit analysis of experience-dependent plasticity in the developing rat barrel cortex. Neuron 38, 277–289 (2003).
9. Guo, Z. V., Hart, A. C. & Ramanathan, S. Optical interrogation of neural circuits in Caenorhabditis elegans. Nat. Methods 6, 891–896 (2009).
10. Lutz, C. et al. Holographic photolysis of caged neurotransmitters. Nat. Methods 5, 821–827 (2008).
11. Nikolenko, V. et al. SLM microscopy: scanless two-photon imaging and photostimulation with spatial light modulators. Front. Neural Circuits 2, doi: 10.3389/neuro.04.005.2008 (2008).
12. Papagiakoumou, E. et al. Scanless two-photon excitation of channelrhodopsin-2. Nat. Methods 7, 848–854 (2010).
13. Reutsky-Gefen, I. et al. Holographic optogenetic stimulation of patterned neuronal activity for vision restoration. Nat. Commun. 4, 1509 (2013).
14. Packer, A. M., Russell, L. E., Dalgleish, H. W. & Häusser, M. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nat. Methods 12, 140–146 (2015).
15. Rückerl, F. et al. Micro mirror arrays as high-resolution spatial light modulators for photoactivation and optogenetics. In SPIE BiOS, 85860U (International Society for Optics and Photonics, 2013).
16. Rückerl, F., Berndt, D., Heber, J. & Shorte, S. Photoactivation and optogenetics with micro mirror enhanced illumination. In SPIE Photonics Europe, 913017 (International Society for Optics and Photonics, 2014).
17. Packer, A. M. et al. Two-photon optogenetics of dendritic spines and neural circuits. Nat. Methods 9, 1202–1205 (2012).
18. Paluch-Siegler, S. et al. All-optical bidirectional neural interfacing using hybrid multiphoton holographic optogenetic stimulation. Neurophotonics 2, 031208 (2015).
19. Golan, L., Reutsky, I., Farah, N. & Shoham, S. Design and characteristics of holographic neural photostimulation systems. J. Neural Eng. 6, 066004 (2009).
20. Vaziri, A. & Emiliani, V. Reshaping the optical dimension in optogenetics. Curr. Opin. Neurobiol. 22, 128–137 (2012).
21. Zhang, Z., Barbastathis, G. & Levoy, M. Limitations of coherent computer generated holograms. In Digital Holography and Three-Dimensional Imaging, DTuB5 (Optical Society of America, 2011).
22. Levoy, M., Zhang, Z. & McDowall, I. Recording and controlling the 4D light field in a microscope using microlens arrays. J. Microsc. 235, 144–162 (2009).
23. Isaksen, A., McMillan, L. & Gortler, S. J. Dynamically reparameterized light fields. In SIGGRAPH, 297–306 (ACM, 2000).
24. Levoy, M. et al. Synthetic aperture confocal imaging. ACM Trans. Graph. 23, 825–834 (2004).
25. Holmes, T. J. et al. Light microscopic images reconstructed by maximum likelihood deconvolution. In Pawley, J. B. (ed.) Handbook of Biological Confocal Microscopy, 389–402 (Springer, 1995).
26. Andersen, A. & Kak, A. C. Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm. Ultrason. Imaging 6, 81–94 (1984).
27. Cook, R. L. Stochastic sampling in computer graphics. ACM Trans. Graph. 5, 51–72 (1986).
28. Lima, S. Q. & Miesenböck, G. Remote control of behavior through genetically targeted photostimulation of neurons. Cell 121, 141–152 (2005).
29. Boyden, E. S., Zhang, F., Bamberg, E., Nagel, G. & Deisseroth, K. Millisecond-timescale, genetically targeted optical control of neural activity. Nat. Neurosci. 8, 1263–1268 (2005).
30. Sen, P. et al. Dual photography. ACM Trans. Graph. 24, 745–755 (2005).
Acknowledgements
We thank Marc Levoy and Gordon Wetzstein of Stanford University, Enrico Geissler of Carl Zeiss AG, and Bettina Heise and Franz Josef Hiptmair of Johannes Kepler University Linz for their support and for providing optical equipment.
Author information
Affiliations
Institute of Computer Graphics, Johannes Kepler University, Linz, 4040, Austria
David C. Schedl & Oliver Bimber
Contributions
D.C.S. implemented the prototypes (soft and hardware), performed the physical experiments, and collected measurements; O.B. developed the overall concepts, supervised the experiments and the implementations, and wrote the paper; both authors reviewed the manuscript.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Oliver Bimber.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/