To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can instead be used to demultiplex three wavelengths that simultaneously illuminate the sample, and to digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging under simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress these artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to those obtained with sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired.
Computational microscopy modalities are becoming increasingly powerful thanks to rapid improvements in digital imaging chips, graphics processing units, as well as emerging image reconstruction methods that enable high-resolution imaging over large sample areas and volumes1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19. Among these different computational microscopy techniques, digital holography is one of the most widely explored modalities, as it permits high-throughput 3D imaging of the phase and amplitude information of specimens20,21,22,23,24,25,26,27. Holographic microscopy in general demands spatial and temporal coherence of illumination, although partially-coherent or even incoherent sources can also be utilized in certain imaging designs. To achieve color imaging in digital holography, various methods have been employed2,25,28,29,30,31,32,33,34. One of the most commonly used approaches sequentially captures three holograms at different wavelengths, at red (e.g., 610–650 nm), green (e.g., 520–560 nm) and blue (e.g., 450–480 nm) parts of the spectrum, and then digitally cross-registers and combines these holograms to reconstruct a color image of the specimen25,29,30,31,33,34.
As an alternative to this sequential color illumination method, simultaneous multi-wavelength illumination of the sample has also been utilized in combination with a color imager chip (e.g., with a Bayer color-filter array, CFA) to digitize the resulting multi-color hologram in a single snap-shot29,30. Using the known transmission spectra of the red (R), green (G) and blue (B) filters of the Bayer CFA, three sets of holograms corresponding to three unique wavelengths can be digitally retrieved through an inverse mapping (i.e., de-multiplexing) algorithm29. Compared to sequential color illumination, this simultaneous illumination approach saves experimental time through digital de-multiplexing of color channels; however, the reconstructed color images have lower resolution and exhibit color artifacts. Unlike natural images, holograms contain rapidly changing oscillations/fringes, and since the different channels of the color filters of a Bayer pattern are not exactly at the same spatial location, the traditional Bayer demosaicing process, when dealing with the sharp oscillations of a hologram, causes severe fringe artifacts29,30,35,36, which become even more noticeable for wide-field holographic imaging systems with large effective pixels or small magnifications. To better handle such sampling artifacts, different Bayer demosaicing approaches have also been proposed35,36,37,38,39; however, these methods still fall short of creating an artifact-free de-multiplexing of the holographic high-frequency fringes created by multi-wavelength illumination.
To tackle these sampling and de-multiplexing related challenges in holographic color imaging, here we introduce a new high-resolution color microscopy technique termed Demosaiced Pixel Super-Resolution (D-PSR). In this D-PSR approach, we first capture a set of raw holograms on a Bayer color sensor chip using simultaneous multi-wavelength illumination, where the sensor plane is shifted by small increments. We then perform pixel super-resolution based on these sub-pixel shifted raw holograms40,41 to digitally synthesize smaller pixels (e.g., by a factor of ~3) for each element of the Bayer CFA. Using the pre-calibrated spectral cross-talk matrix of each filter of the Bayer CFA at our selected illumination wavelengths, we de-multiplex three color channels, each of which is also pixel super-resolved. As will be demonstrated in our Results section, this D-PSR approach overcomes the Bayer CFA related spatial sampling limitations and color artifacts of previous color de-multiplexing approaches, significantly improving the performance of holographic high-resolution color imaging.
For experimental demonstration of our D-PSR approach, we selected lens-free holographic on-chip imaging, where the sample is placed on top of a Bayer image sensor chip, typically at a distance of ~0.3–1 mm away from the chip surface. In this unit magnification transmission imaging set-up on a chip, the sample field-of-view (FOV) is equal to the active area of the sensor chip, which is typically ~20–30 mm2 using a state-of-the-art CMOS imager chip. As a result of this unique imaging configuration, the FOV and resolution are decoupled from each other, and partially coherent sources can be utilized to push the resolution of the reconstructed holograms to the diffraction limit27,34,42,43. Another important advantage of this on-chip holographic imaging approach is the compactness and cost-effectiveness of its set-up, which makes it highly suitable for telemedicine applications and field use. Since this is an in-line holographic imaging geometry, the twin-image noise that is characteristic of an in-line set-up needs to be eliminated, and we used multi-height based phase retrieval2,44,45 for this purpose. Through these on-chip microscopy experiments, we demonstrated that D-PSR achieves a color imaging performance that is comparable to sequential illumination of the sample at three distinct wavelengths (corresponding to R, G and B channels) and therefore improves the overall speed of holographic color imaging. Finally, we emphasize that this D-PSR technique is broadly applicable to any holographic microscopy application (lens-based or lens-free), where high-resolution imaging and simultaneous multi-wavelength illumination are sought.
Optical setup and data acquisition
As shown in Fig. 1, we use an in-line holographic lens-free on-chip imaging geometry. A broadband source (WhiteLase-Micro; Fianium Ltd, Southampton, UK) is filtered by an acousto-optic tunable filter down to ~5 nm bandwidth and is coupled to a single mode fiber to generate partially-coherent illumination across the field-of-view (~20 mm2) of our lens-free imaging setup. This source can simultaneously output up to 8 wavelength channels into the same fiber optic cable, and we used three spectral bands in our experiments (i.e., ~470 nm, ~530 nm, ~625–630 nm) to create multi-color illumination. This multiplexed partially coherent light, coming out of the fiber optic cable, propagates ~6 cm and impinges on the specimen plane. Each of the three wavelength channels is partially diffracted by the sample and generates three independent in-line holograms to be sampled by a Bayer color CMOS sensor (16.4 Mpixel, 1.12 μm pixel size, Sony Corp., Japan), which is placed ~0.4 mm below the sample plane. The pixels of this color CMOS imager chip have four channels, namely B (Blue), G1 (Green 1), G2 (Green 2), and R (Red), which form a 2 × 2 period of the Bayer pattern on the imager chip. These filters have different transmission spectra, which will be detailed later on, and this information is crucial for spectral de-multiplexing of the acquired holograms. The color CMOS sensor chip is also mounted on a computer-controlled 3D motion stage to permit: (1) lateral sub-pixel shifts between the imager chip and the sample hologram, which are used to generate pixel super-resolved holograms, and (2) axial modulation of the sample-to-sensor distance, which is used for multi-height based phase retrieval (i.e., twin-image elimination). The entire data acquisition process is automated by a custom-developed LabVIEW program.
The reconstruction algorithm of D-PSR (Fig. 2) starts with the implementation of pixel super-resolution. Pixel super-resolution is a technique that deals with the spatial under-sampling problem in an imaging system40,41, in which a series of sub-pixel shifted low resolution images are acquired to digitally synthesize a high resolution image of the object, significantly increasing the space-bandwidth product of the imaging system27. In our experiments, to achieve pixel super-resolution, the stage was programmed to move the sensor laterally on a 6 × 6 grid, and at each grid point a low-resolution raw hologram is captured. Each recorded raw hologram intensity is then separated into four Bayer channels (namely B, G1, G2, and R), and for each one of these channels we used a conjugate-gradient-based pixel super-resolution method40,41 to synthesize a super-resolved hologram with an effective pixel size of ~0.37 μm at each sample-to-sensor height. The spatial location of each channel with respect to the others is also taken into account and digitally corrected for; therefore, this pixel super-resolution step enables all the Bayer channels (B, G1, G2 and R) to be virtually super-imposed onto each other, which is important to mitigate the artifacts in the subsequent demosaicing steps.
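As a minimal illustration of this step, the shift-and-add idea behind pixel super-resolution can be sketched as follows. This is only a nearest-grid shift-and-add sketch with synthetic data, not the conjugate-gradient solver used in our reconstructions; the 3 × 3 shift grid and array sizes are arbitrary choices for the toy example:

```python
import numpy as np

def shift_and_add_psr(lr_frames, shifts, factor):
    """Synthesize a high-resolution grid from sub-pixel-shifted low-resolution
    frames by placing each frame at its nearest high-resolution offset.
    (Illustrative only; the actual reconstruction uses a conjugate-gradient
    least-squares solver as in refs. 40, 41.)"""
    h, w = lr_frames[0].shape
    hr_sum = np.zeros((h * factor, w * factor))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        ry = int(round(dy * factor)) % factor   # nearest HR row offset
        rx = int(round(dx * factor)) % factor   # nearest HR column offset
        hr_sum[ry::factor, rx::factor] += frame
        hr_cnt[ry::factor, rx::factor] += 1
    hr_cnt[hr_cnt == 0] = 1                     # leave unvisited cells at zero
    return hr_sum / hr_cnt

# Toy example: sample a known high-resolution pattern at 9 sub-pixel shifts
# (a 3 x 3 shift grid) and recover it on a 3x finer grid.
rng = np.random.default_rng(0)
truth = rng.random((12, 12))
factor = 3
shifts = [(i / factor, j / factor) for i in range(factor) for j in range(factor)]
lr_frames = [truth[i::factor, j::factor] for i in range(factor) for j in range(factor)]
hr = shift_and_add_psr(lr_frames, shifts, factor)
```

On real data the shifts are estimated from the raw holograms and do not fall exactly on the high-resolution grid, which is why a least-squares (conjugate-gradient) formulation is used in practice.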
De-multiplexing of pixel super-resolved holograms
The transmission spectra of the four Bayer channels on a color CMOS imager contain considerable color cross-talk among channels (see e.g., Fig. 3(a)). For each pixel of the imager chip, we can formulate this spectral cross-talk as a matrix (W), such that:

$$\begin{pmatrix} M_B \\ M_{G1} \\ M_{G2} \\ M_R \end{pmatrix} = W \begin{pmatrix} I_B \\ I_G \\ I_R \end{pmatrix} \quad (1)$$

where M_B, M_{G1}, M_{G2} and M_R correspond to the pixel super-resolved intensity values for each channel (i.e., the output of the previous sub-section), and I_B, I_G and I_R refer to the de-multiplexed holograms corresponding to the three illumination wavelengths, before the spectral mixing occurred at the color sensor chip. The entries of the 4 × 3 cross-talk matrix W are determined by the transmission spectra of the Bayer CFA. Although the transmission spectrum of each Bayer filter is usually provided by the manufacturer of the sensor-array, we experimentally calibrated it to obtain more accurate results. For this purpose, we first recorded the background (i.e., object-free) response of the sensor chip from 400 nm to 700 nm at 5 nm steps. A 400-by-400 pixel region at the center of the sensor chip is averaged for each channel, and after normalization of the illumination power at each wavelength, measured using a power-meter (Thorlabs PM100, S120UV sensor head), the resulting curve for each channel is taken as the spectral response of that Bayer filter on the imager chip (see e.g., Fig. 3(a)). We should emphasize that for a given color imager chip, these spectral cross-talk calibration curves need to be measured only once. Based on these measured spectra, the cross-talk matrix (W in Eq. (1)) can be inferred for any arbitrary set/choice of illumination wavelengths that are multiplexed in our holographic color imaging experiments (see e.g., Table 1).
Based on this spectral cross-talk matrix, the de-multiplexed holograms corresponding to the three simultaneous illumination wavelengths in our holographic imaging set-up can then be determined through a left inverse operation:

$$\begin{pmatrix} I_B \\ I_G \\ I_R \end{pmatrix} = (W^T W)^{-1} W^T \begin{pmatrix} M_B \\ M_{G1} \\ M_{G2} \\ M_R \end{pmatrix} \quad (2)$$

where the superscript −1 refers to the inverse and T refers to the transpose of a matrix.
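This de-multiplexing amounts to a per-pixel least-squares inversion of the 4 × 3 cross-talk matrix. A minimal sketch, using a made-up cross-talk matrix W — the real entries come from the calibrated Bayer filter transmission spectra (e.g., Table 1):

```python
import numpy as np

# Hypothetical cross-talk matrix W: rows = Bayer channels (B, G1, G2, R),
# columns = illumination wavelengths (e.g., ~470, ~530, ~625 nm). The real
# entries come from the calibrated transmission spectra of the Bayer CFA.
W = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.18, 0.72, 0.10],
              [0.05, 0.10, 0.85]])

# Ground-truth per-wavelength hologram intensities for two example pixels,
# shape (3, n_pixels); each column is one pixel.
I_true = np.array([[0.30, 0.10],
                   [0.60, 0.50],
                   [0.90, 0.20]])

M = W @ I_true   # spectral mixing at the sensor: measured Bayer intensities

# Left inverse of Eq. (2): I = (W^T W)^(-1) W^T M
I_demux = np.linalg.inv(W.T @ W) @ W.T @ M
```

In practice `np.linalg.lstsq(W, M)` applies the same left inverse in a numerically more robust way.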
Multi-height based phase retrieval
One drawback of the in-line holographic imaging geometry is its twin image noise. Additional constraints, such as the object support46, sparsity5,47,48 or multiple measurements at different heights44 or illumination angles34, can be employed to eliminate the twin image noise. For spatially dense and connected objects, we usually utilize multi-height based phase retrieval, because it is relatively hard to define an object support for such connected samples. In this multi-height based iterative phase retrieval algorithm2,44,45, we start from one of the pixel super-resolved holograms and digitally propagate it to the next measurement height, where we replace the amplitude of the field with the measured amplitude, and then propagate it to the next height until we reach the last measurement. The same process is repeated backward and then forward for e.g., 20–30 iterations. Each wave propagation operation is done using the angular spectrum method49. For faster convergence, we also use the solution to the transport-of-intensity equation (TIE) as our initial phase guess for multi-height phase retrieval2,50. In the experiments reported in this manuscript, we measured holograms at 4 consecutive heights that are axially separated by ~30 μm.
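A compact sketch of this procedure, with an angular spectrum propagator and the amplitude-replacement loop, is given below. The array sizes, wavelength and distances are illustrative; the TIE-based initial phase guess is omitted and a zero-phase guess is used instead:

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, dx):
    """Propagate a complex field over a distance dz using the angular
    spectrum method (evanescent components are suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * dz)
    H[arg < 0] = 0
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_height_retrieval(amps, heights, wavelength, dx, n_iter=20):
    """Iterate between measurement planes, keeping the propagated phase and
    replacing the amplitude with the measured one at each plane (zero-phase
    initial guess; the paper initializes with a TIE solution instead)."""
    field = amps[0].astype(complex)
    order = list(range(1, len(amps))) + list(range(len(amps) - 2, -1, -1))
    cur = 0
    for _ in range(n_iter):
        for nxt in order:
            field = angular_spectrum(field, wavelength,
                                     heights[nxt] - heights[cur], dx)
            field = amps[nxt] * np.exp(1j * np.angle(field))
            cur = nxt
    return field

# Sanity check of the propagator: forward then backward propagation of a
# smooth (band-limited) field returns the original field.
n, dx, wl = 64, 0.37e-6, 530e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
f0 = np.exp(-(X ** 2 + Y ** 2) / (2 * (2e-6) ** 2)).astype(complex)
f1 = angular_spectrum(f0, wl, 30e-6, dx)
f2 = angular_spectrum(f1, wl, -30e-6, dx)
```

The forward/backward round trip checks that the propagator is energy-preserving for band-limited fields, which is what makes the iterative amplitude replacement well behaved.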
Saturation correction in digitization of wavelength-multiplexed holograms
When using the D-PSR approach for imaging of biological samples, a saturation-related de-multiplexing color artifact can sometimes be observed, as also illustrated in Supplementary Fig. S1. Although pixel saturation can be avoided by reducing the exposure time to a point where no pixels are saturated, this will result in unacceptable loss of information, as most of the pixels will then use only a small portion of the dynamic range. Alternatively, here we use a Bayesian-estimation-based saturation correction algorithm51, which uses the unsaturated pixels from other color channels at the same physical location to get a better estimate of the saturated pixels. It is theoretically proven that, using this Bayesian estimation approach, the corrected image will always have a smaller error than the uncorrected saturated one52.
Below, we detail the use of this saturation correction method in our D-PSR approach. We assume that for a given raw image, the pixel values of the different color channels follow a multivariate normal distribution:

$$\begin{pmatrix} x_s \\ x_u \end{pmatrix} \sim N\!\left(\begin{pmatrix} \mu_s \\ \mu_u \end{pmatrix}, \begin{pmatrix} \Sigma_{ss} & \Sigma_{su} \\ \Sigma_{us} & \Sigma_{uu} \end{pmatrix}\right) \quad (3)$$

where x_s and x_u denote the pixel values of the saturated and unsaturated channels, \mu_s and \mu_u represent their means, respectively, and \Sigma_{ss}, \Sigma_{su}, \Sigma_{us} and \Sigma_{uu} represent their covariance blocks. The saturated channel can then be replaced by its statistical expectation, conditioned on the known non-saturated channel measurements at the same pixel location:

$$\hat{x}_s = E[x_s \mid x_u] = \mu_s + \Sigma_{su}\,\Sigma_{uu}^{-1}(x_u - \mu_u) \quad (4)$$
Note that since the spectral responses of the G1 and G2 channels are nearly identical, we took the average of these two super-resolved channels and treated it as a single channel G, only for this saturation correction step. We numerically implemented this saturation correction algorithm (see Supplementary Fig. S2) in five steps, as follows:
Step 1. Estimate the a-priori mean and co-variance of the unsaturated pixel values of the R, G and B channels:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \Sigma = \frac{1}{N-1}\sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T$$

where N is the total number of un-saturated pixels in the image and x_i is a vector that represents the pixel values of the R, G and B channels at pixel location i.
Step 2. After defining a saturation level s, the distance of each color channel c to the saturation level can be determined as:

$$d_c = \frac{s - \mu_c}{\sigma_c}$$

where \mu_c and \sigma_c^2 define the mean and the variance of all the unsaturated pixels in color channel c, respectively. We typically choose s = 1020 for our 10-bit image sensor.
Step 3. Start from the most saturated channel, i.e., the channel that has the smallest distance to the saturation level, and replace the values of its saturated pixels with the expectation value calculated using Eq. (4). All the pixels in the other two un-corrected channels are taken as valid pixels.
Step 4. Correct the second most saturated channel using Eq. (4), taking the corrected most saturated channel and the other un-corrected channel as valid pixels.
Step 5. Correct the third (last) saturated channel using Eq. (4), taking the corrected values of the first and the second most saturated channels as valid pixels.
We typically run Steps 3–5 iteratively (e.g., for 3 iterations) to get improved results. As illustrated in Supplementary Fig. S1, the de-multiplexing color artifacts shown in the first column are greatly alleviated with this additional saturation correction step (second column), resulting in a reconstructed color image that is similar to a sequentially taken RGB image (third column).
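The conditional-expectation replacement of Eq. (4), together with the statistics of Step 1 and the saturation threshold of Step 2, can be sketched for a single saturated channel as follows. The data here are synthetic, strongly correlated R/G/B values, and the toy saturation level is our own choice:

```python
import numpy as np

def correct_saturated_channel(pixels, sat_idx, mu, cov):
    """Replace the saturated channel of each pixel with its conditional
    expectation given the unsaturated channels (Gaussian conditioning,
    as in Eq. (4)): mu_s + Sigma_su Sigma_uu^-1 (x_u - mu_u)."""
    idx = [i for i in range(pixels.shape[1]) if i != sat_idx]
    S_su = cov[np.ix_([sat_idx], idx)]   # 1 x 2 cross-covariance
    S_uu = cov[np.ix_(idx, idx)]         # 2 x 2 covariance of valid channels
    resid = pixels[:, idx] - mu[idx]
    return mu[sat_idx] + (S_su @ np.linalg.inv(S_uu) @ resid.T).ravel()

# Synthetic correlated channels: R, G, B all driven by the same underlying
# intensity, so the unsaturated channels are informative about the saturated one.
rng = np.random.default_rng(1)
base = rng.random(5000)
data = np.stack([0.9 * base,
                 0.8 * base + 0.02 * rng.standard_normal(5000),
                 0.7 * base + 0.02 * rng.standard_normal(5000)], axis=1)

s = 0.8                                # toy saturation level (Step 2)
valid = data[:, 0] < s                 # channel 0 plays the saturated channel
mu = data[valid].mean(axis=0)          # Step 1: a-priori statistics from valid pixels
cov = np.cov(data[valid].T)
est = correct_saturated_channel(data[~valid], 0, mu, cov)   # Steps 3-5, one channel
```

In the full algorithm this replacement is applied channel by channel in order of decreasing saturation, and Steps 3–5 are iterated a few times as described above.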
White-balancing of wavelength-multiplexed holograms
Although the power levels of the multiplexed illumination wavelengths during our measurements are adjusted so that their detected intensities are very close to each other, there are still small uncontrolled variations among color channels. To correct for these power variations, we first choose a uniform background (empty) region of the captured hologram and calculate the average of each Bayer channel within this selected region, which is taken as the relative power level of the corresponding illumination wavelength. All the reconstructed holographic images are then normalized using these calculated power ratios to obtain a white-balanced image.
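A minimal sketch of this white-balancing step is given below; the background region coordinates are arbitrary here, whereas in practice an empty region of the hologram is selected:

```python
import numpy as np

def white_balance(channels, bg_region):
    """Equalize the relative illumination power of the color channels by
    normalizing each one with the mean of an empty background region."""
    powers = np.array([ch[bg_region].mean() for ch in channels])
    ref = powers.mean()
    return [ch * (ref / p) for ch, p in zip(channels, powers)]

# Toy example: the same reconstructed image seen through three channels with
# unequal illumination powers.
rng = np.random.default_rng(2)
img = rng.random((32, 32)) + 1.0
channels = [0.8 * img, 1.0 * img, 1.3 * img]
bg = (slice(0, 8), slice(0, 8))        # stand-in for a uniform empty region
balanced = white_balance(channels, bg)
```

After normalization, the background means of all channels coincide, so the per-wavelength power imbalance no longer tints the composite color image.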
Optimization of the choice of illumination wavelengths in D-PSR
Typically three illumination wavelengths are multiplexed in our D-PSR experiments, which are assigned to the B, G and R channels, respectively. In this work, we also addressed the following question: if one could arbitrarily choose these three illumination wavelengths, what would be the optimal wavelength range for each source to be multiplexed? Intuitively, the optimality of the selected wavelengths depends on the transmission spectra (i.e., wavelength cross-talk) of the color filters on the imager chip, as well as on the transmission characteristics of the specimen to be imaged. Since we aim here at general-purpose microscopic imaging, we do not consider optimization of the illumination as a function of the sample's spectral characteristics, and therefore only consider the transmission spectra of the CFA on the imager chip.
If the multiplexed channels are chosen to be too close in wavelength, the cross-talk among them will be too strong, and the illumination power of one or more channels needs to be reduced to accommodate the finite bit-depth of the digital sensor, which in turn will cause loss of spatial information. To better understand how this de-multiplexing error varies according to the selection of the multiplexed illumination wavelengths, we conducted a brute-force search of all the possible wavelength combinations for the spectral range of 400 nm to 700 nm with 1 nm step size and compared the resulting de-multiplexing errors (see the Supplementary Information and Supplementary Fig. S3 for further details). As illustrated in Supplementary Fig. S4, the differences among the de-multiplexing errors for different wavelength combinations are smaller than 6% over a large spectral range (>50 nm), which also contains the typical choice of red (610–650 nm), green (520–560 nm) and blue (450–480 nm) illumination bands. Based on this, we concluded that for a typical Bayer sensor, like the sensor we used in this work, the range of wavelength combinations that can be used for simultaneous illumination of the sample is rather large.
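The brute-force idea can be sketched as follows, using hypothetical Gaussian models of the Bayer filter spectra and the condition number of W as a stand-in for the de-multiplexing error metric; the actual metric and the measured spectra are described in the Supplementary Information, and a coarse 10 nm grid keeps this toy search fast:

```python
import numpy as np
from itertools import combinations

wl = np.arange(400.0, 701.0, 10.0)   # coarse wavelength grid (nm)

def gauss(center, width):
    """Hypothetical Gaussian filter transmission curve."""
    return np.exp(-(wl - center) ** 2 / (2 * width ** 2))

# Made-up filter models; the real curves are the calibrated spectra (Fig. 3(a)).
filters = {'B': gauss(460, 30), 'G1': gauss(530, 35),
           'G2': gauss(535, 35), 'R': gauss(610, 35)}

def crosstalk_matrix(lams):
    """Build the 4 x 3 cross-talk matrix W for a candidate wavelength triplet."""
    cols = [int(np.searchsorted(wl, lam)) for lam in lams]
    return np.array([[filters[ch][c] for c in cols]
                     for ch in ('B', 'G1', 'G2', 'R')])

# Brute-force search: pick the triplet whose W is best conditioned, i.e.,
# amplifies de-multiplexing noise the least.
best = min(combinations(wl, 3),
           key=lambda t: np.linalg.cond(crosstalk_matrix(t)))
```

As expected, well-conditioned triplets keep the three wavelengths spectrally spread out, consistent with the wide tolerance range found in the full search.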
Results and Discussion
When we multiplex the illumination wavelengths and simultaneously record the resulting holograms using a Bayer imager chip, mainly two types of artifacts are generated: (1) the spectral cross-talk among different Bayer filters creates pixel-level mixing of the holographic information of different illumination wavelengths (see e.g., Fig. 3(a)); and (2) spatial demosaicing artifacts are created because the Bayer mosaic geometry has 4 color channels (B, G1, G2, and R) that are spatially separated by one pixel shift, and requires interpolation of neighboring pixels for the missing spatial information, which gives rise to fringe artifacts in holograms (see Fig. 4(a)). Conventional demosaicing techniques35,53 employed in digital cameras and the photography literature rarely suffer from these artifacts as most natural images are spatially smooth. However, when dealing with multi-color digital holographic microscopy, the recorded holograms contain rapid oscillations and fringes, and therefore using a conventional demosaicing approach will result in severe color artifacts.
The first problem listed above, i.e., the spectral cross-talk issue, can generate strong high-frequency artifacts if left uncorrected. Experimental examples of these artifacts are illustrated in Fig. 3(b,c), where we show pixel super-resolved holographic image reconstructions without de-multiplexing and compare them against our D-PSR results (Fig. 3(d,e)), which show significant improvements especially in high-resolution features. As detailed in our Methods section, this issue can be tackled by digital de-multiplexing (through Eq. (2)). However, if we perform this de-multiplexing step directly on demosaiced Bayer pixels (i.e., without pixel super-resolution), it will also generate color artifacts for holographic imaging at interpolated fringes (e.g., see Fig. 4(a)), and such fringe artifacts at the hologram plane will spread out to the whole reconstructed image and generate severe rainbow artifacts, as can be seen in Fig. 4(b,e). D-PSR results for the same samples (Fig. 4(c,f)) show significant improvements and suppression of such color artifacts, in addition to having much better spatial resolution compared to interpolation-based de-multiplexing results shown in Fig. 4(b,e).
Next, we imaged color-stained Papanicolaou smears (Pap smears) that are frequently used for screening of cervical cancer in order to compare the color imaging performance of D-PSR against some of the previously reported holographic color imaging techniques, including sequential RGB imaging29,34 and YUV color-space averaging32. As illustrated in the experimental comparison provided in Fig. 5, D-PSR has a very similar color imaging performance compared to sequential RGB imaging; however, by benefiting from simultaneous multi-wavelength illumination, D-PSR requires 3-fold fewer measurements than sequential color imaging, which makes it much more data efficient and faster. YUV color-space averaging, on the other hand, acquires a similar number of raw measurements/holograms compared to D-PSR, i.e., N + 3 vs. N, respectively, where N is the number of raw measurements that D-PSR uses. However, the color imaging performance of the YUV color-space averaging technique is inferior to D-PSR, as it shows color bias and artifacts, also causing color leakage at the borders of rapid spatial transitions, as illustrated in Fig. 5(c,g,k). In the last column of Fig. 5, microscope images of the same sample regions of interest taken using a 40× 0.75 NA objective-lens are also shown for comparison. Note that the lens-based microscope images are blurred in some regions because of their limited depth-of-focus compared to the lens-free microscopy images. Furthermore, to emphasize the large FOV advantage of lens-free on-chip microscopy, the typical FOVs of 40× and 20× objective-lenses are also shown in Fig. 5(a).
We should also note that, in addition to the 3-fold imaging speed improvement and reduced number of measurements compared to sequential color illumination, there are other scenarios in which simultaneous multi-wavelength illumination is preferred and D-PSR can be applied. For example, in imaging flow-cytometry systems, specimens (e.g., parasites or cells of interest) are constantly moving in a flow, and a motion-based PSR approach54 can be combined with D-PSR to obtain color images of the flowing micro-objects without the need for sequential multi-color illumination, which would directly improve the flow rate and the throughput of the imaging cytometer.
Finally, it is important to emphasize that the use of a color (e.g., a Bayer RGB) sensor-chip, as compared to a monochrome sensor, has several advantages for holographic microscopy applications. First, color imagers are much more cost-effective than their monochrome versions due to economies of scale and their massive volume in the consumer electronics market, especially in mobile-phones. Second, most of these small pixel pitch CMOS imager chips, including the one used in this work with ~1.1 μm pixel size, are not available for sale in monochrome format, which limits the spatial resolution that one can achieve using on-chip microscopy techniques with a monochrome chip. And third, the Bayer CFA of a color imager chip provides an excellent framework for multiplexed high-resolution color imaging of specimens, as demonstrated in this manuscript.
We demonstrated a new holographic color imaging technique, termed Demosaiced Pixel Super-Resolution (D-PSR), which merges the Bayer demosaicing and pixel super-resolution steps to achieve wavelength-multiplexed holographic color imaging. Demosaicing-induced holographic color artifacts, which arise due to limited spatial sampling at a Bayer CFA, are significantly alleviated in D-PSR through the digital synthesis of spatially overlapping and much smaller effective pixels in each color channel. Furthermore, in D-PSR the pixel-level spectral cross-talk of a Bayer CFA is compensated by digital de-multiplexing. Compared to holographic color imaging using sequential multi-wavelength illumination, this new approach requires 3-fold fewer raw holograms/measurements while achieving very similar color imaging performance. D-PSR can be broadly used for high-resolution holographic color imaging and microscopy applications where wavelength-multiplexing is desired.
How to cite this article: Wu, Y. et al. Demosaiced pixel super-resolution for multiplexed holographic color imaging. Sci. Rep. 6, 28601; doi: 10.1038/srep28601 (2016).
The Ozcan Research Group at UCLA gratefully acknowledges the support of the Presidential Early Career Award for Scientists and Engineers (PECASE), the Army Research Office (ARO; W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF EAGER Award, the NSF INSPIRE Award, the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the Office of Naval Research (ONR), and the Howard Hughes Medical Institute (HHMI). This work is based upon research performed in a laboratory renovated by the National Science Foundation under Grant No. 0963183, which is an award funded under the American Recovery and Reinvestment Act of 2009 (ARRA).