All-passive pixel super-resolution of time-stretch imaging

Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for state-of-the-art fast data-acquisition hardware in order to preserve high image resolution at an ultrahigh frame rate, which hampers the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image-reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Integrated with high-throughput image-processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and for machine vision in manufacturing quality control.

where $I(t)$ is the distorted low-resolution measurement of the object $f(x, y)$ in the presence of measurement noise $n(t)$, and $g(x, y)$ is the intermediate image of the object illuminated by the spectrally-encoded line beam $I_B(x)$. The functions $h_1(x, y)$ and $h_2(t)$ correspond to the 2D point spread function (PSF) of the optical system and the 1D signal pre-conditioning filter of the digitizer, respectively. The warp angle $\theta$ accounts for the image distortion introduced by the asynchronous sampling of the time-stretch pulse train. The serialization operator $S(\cdot)$, representing the line-scan process, uniquely maps each spatial coordinate $(x, y)$ to a time $t$. The continuous 1D signal $I(t)$ is subsequently sampled by the digitizer.
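Written out with the definitions above, one consistent reading of Eqs. (1) and (2) is the following; the operator grouping here is our reconstruction from the surrounding text, not a verbatim copy of the original equations:

```latex
\begin{aligned}
g(x, y) &= f(x, y)\, I_B(x),
  && \text{spectrally-encoded line illumination} \\
I(t) &= h_2(t) * S\!\left\{ W_\theta\!\left[\, g(x, y) * h_1(x, y) \,\right] \right\} + n(t),
  && \text{blur, warp, serialization, filtering, noise}
\end{aligned}
```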
Here, we seek to restore the object $f(x, y)$ from the time-stretch measurement $I(t)$ with the pixel-SR algorithm.
Step 1: Signal de-serialization. Prior to high-resolution image restoration, each input 1D signal $I(t)$ [Supplementary Fig. 1(a)] is first de-serialized to form an intermediate image $I(x, y)$; this step re-indexes, but does not modify, the digital samples [Supplementary Fig. 1(b)]. By re-arranging the terms in Eqs. (1) and (2), the image corruption model can be represented as

$$I(x, y) = W_\theta\big[\, f(x, y) * h(x, y) + \bar{I}_B(x) \,\big] + n(x, y), \qquad (3)$$

where $h(x, y) = h_1(x, y) * h_2(t = C_t C_x^{-1} x)$ and $\bar{I}_B(x) = h_2(x) * I_B(x)$. The image warp transform operator $W_\theta[\cdot]$ maps the spatial coordinates such that $(x, y) \to (x + y\tan\theta,\, y)$. In the following paragraphs, we describe how the image distortion parameters [i.e. $W_\theta(\cdot)$ and $\bar{I}_B(x)$] are estimated from the captured data, and how the object $f(x, y)$ is subsequently restored by denoised non-uniform interpolation.
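As a minimal illustration of the de-serialization step, the 1D record is simply re-indexed into rows of line scans; the function name and array sizes here are hypothetical, and for simplicity an integer number of samples per line is assumed (in practice this ratio is non-integer, which is exactly what produces the warp angle $\theta$):

```python
import numpy as np

def deserialize(signal, samples_per_line, num_lines):
    """Reshape a 1-D digitizer record into a 2-D intermediate image.

    Each row is one time-stretch line scan; sample values are only
    re-indexed from t to (x, y), never modified.
    """
    trimmed = signal[: samples_per_line * num_lines]
    return trimmed.reshape(num_lines, samples_per_line)

# Toy record: 3 line scans of 4 samples each.
record = np.arange(12.0)
image = deserialize(record, samples_per_line=4, num_lines=3)
```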
Step 2: Pixel registration by image warp estimation.
The performance of pixel-SR is highly sensitive to errors in pixel registration. An initial value of the relative pixel drift $\delta x$ can be obtained from the specifications of the pulsed laser and the digitizer. However, its precise value can only be estimated from the captured data, because the laser cavity length varies with ambient temperature and mechanical perturbation. In our approach, we achieve accurate pixel registration by optimizing background suppression. In contrast to the moving object $f(x, y)$, which contributes a varying spectral shape to the time-stretch line scans, the laser spectral shape $I_B(x)$ is highly stable from pulse to pulse and appears as straight bands in the background of the captured raw data. Owing to the pixel drift, the line scans are warped at an angle $\theta$ [Supplementary Fig. 1(b)]. This warping must be compensated for accurate extraction of the lasing spectrum, evaluated as

$$\hat{I}_B(x; \theta) = \frac{1}{M} \sum_{y=1}^{M} W_\theta^{-1}[I](x, y), \qquad (4)$$

where $M$ is the number of line scans. This property can be exploited to obtain an accurate estimate $\hat{\theta}$ by maximizing the "cleanliness" of the extracted foreground, i.e. by minimizing the squared residual in the foreground, expressed as

$$\hat{\theta} = \arg\min_\theta \sum_{y=1}^{M} \sum_{x=1}^{N} \big|\, W_\theta^{-1}[I](x, y) - \hat{I}_B(x; \theta) \,\big|^2, \qquad (5)$$

where the integer $N$ is the number of pixels in each line scan. Supplementary Figure 1(c) depicts this image-warp registration process. The accuracy of this pixel-registration approach is fundamentally limited by the "decorrelation distance" of the laser spectrum, also known as the spectral coherence of the time-stretched illumination pulse, i.e. $[\tan\theta] < C_x\,\delta\lambda/(M\Delta y)$.
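The warp-angle registration can be sketched as follows; the grid search over candidate angles, the row-wise linear interpolation used to de-warp, and the mean-based background estimate are implementation assumptions rather than the paper's exact procedure:

```python
import numpy as np

def estimate_warp_angle(img, candidates):
    """Estimate the warp angle by maximizing background 'cleanliness':
    de-warp at each candidate angle, average the rows to estimate the
    static laser spectrum, and keep the angle that minimizes the
    squared foreground residual.
    """
    rows, cols = img.shape
    x = np.arange(cols)
    best_theta, best_cost = candidates[0], np.inf
    for theta in candidates:
        # Undo the shear: row y was shifted by y*tan(theta) along x.
        dewarped = np.array(
            [np.interp(x, x - y * np.tan(theta), img[y]) for y in range(rows)]
        )
        background = dewarped.mean(axis=0)            # pulse-averaged spectrum
        cost = np.sum((dewarped - background) ** 2)   # foreground residual
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta

# Synthetic raw data: a stable spectrum sheared by tan(0.1) per line scan.
x = np.arange(64)
spectrum = np.cos(0.3 * x)
raw = np.array([np.interp(x - y * np.tan(0.1), x, spectrum) for y in range(16)])
theta_hat = estimate_warp_angle(raw, np.linspace(0.0, 0.2, 41))
```

The residual is evaluated against the row-averaged background, so a static spectrum drives the cost toward zero only at the correct shear.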
The estimated warp angle $\hat{\theta}$ is then utilized to compute the spatial coordinates of all pixels in the low-resolution line scans $I(x, y)$, as shown in Supplementary Fig. 1(c). The registered pixel coordinates and the corresponding pixel values $(x_j, y_j, I_j)$ are then indexed in a k-dimensional (K-D) tree structure [1] for efficient searching.
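The indexing step can be illustrated with SciPy's `cKDTree`; the toy coordinates and the sign convention of the shear below are ours, chosen only to show the build-once, query-many pattern:

```python
import numpy as np
from scipy.spatial import cKDTree

# Registered coordinates of a warped stack of line scans (toy numbers).
theta = 0.1
ys, xs = np.mgrid[0:4, 0:8].astype(float)
x_reg = xs + ys * np.tan(theta)           # x_j after warp registration
coords = np.column_stack([x_reg.ravel(), ys.ravel()])
values = x_reg.ravel()                    # pixel values I_j (placeholder)

tree = cKDTree(coords)                    # index (x_j, y_j, I_j) once ...
dist, idx = tree.query([3.0, 1.5], k=4)   # ... then query nearest pixels fast
neighbours = values[idx]
```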
Step 3: Illumination background extraction. The undulated laser spectrum extracted in Eq. (4), i.e. $\hat{I}_B(x;\theta)\big|_{\theta=\hat{\theta}}$, is also aliased; it must be restored to further suppress the illumination background. This problem can be solved by interleaving the first several line scans, in which the object is absent [Supplementary Fig. 1(d)]. A fast shift-and-add algorithm [2] is implemented to interleave the first $q$ low-resolution time-stretch pulses onto a high-resolution 1D grid. The time-stretch pulses are zero-filled and shifted before the signals are summed. To enable fast pixelized operations, the relative shift of the $k$-th pulse ($k \le q$) is rounded to a multiple of $\Delta x/q$. A large $q$ is desirable for finer pixel registration. However, time-interleaving multiple pulses reduces the effective imaging line-scan rate; the effective pixel size along the slow axis ($q\Delta y$) must therefore remain smaller than the optical diffraction limit to avoid image aliasing. The optimization criterion for $q$ is formulated as

$$\min_{0 < q < r/\Delta y} \left| \frac{f}{F} - N - \frac{p}{q} \right|, \qquad (6)$$

where $p$ and $q$ are integers; the integer $N$ is the number of pixels per line scan rounded to the nearest integer; and $r$ is the diffraction limit of the optical microscopy system. Mathematically, this problem is equivalent to rational-number approximation, whose solution is a truncated continued fraction of $f/F$ computed with the Euclidean algorithm [3]. The relative subpixel shift of the $k$-th pulse ($k \le q$) is thus determined as $d_k = [p(k-1) \bmod q]/q \times \Delta x$. This time-interleaving algorithm is also applied in Fig. 4(d). At a sampling rate of 3.2 GSa/s, the best approximation of $f/F$ is $275\tfrac{16}{31}$, i.e. $N = 276$, $p = -15$, and $q = 31$. The low-resolution signals of the time-stretch pulses are thus shifted by $d_k$ and summed, giving the anti-aliased laser spectrum [4]

$$\hat{I}_B(x) = \sum_{k=1}^{q} \hat{I}_B(x; \hat{\theta})\,\mathrm{comb}\!\left(\frac{x - d_k}{\Delta x}\right), \qquad (7)$$

where $\mathrm{comb}(\cdot)$ is a train of impulse functions [5].
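The rational-approximation step can be sketched with the Euclidean algorithm as follows; the helper names are ours, and the concrete ratio 8541/31 (i.e. 275 16/31 samples per pulse) is our reading of the quoted 3.2 GSa/s example:

```python
def convergents(num, den):
    """Yield continued-fraction convergents of num/den, computed with
    the Euclidean algorithm via the standard recurrence."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    while den:
        a, rem = divmod(num, den)
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        num, den = den, rem
        yield p1, q1

def best_pixel_shift(num, den, q_max):
    """Best rational approximation num/den ~ N + p/q with q <= q_max."""
    N = round(num / den)
    p, q = 0, 1
    for pc, qc in convergents(num, den):
        if qc <= q_max:
            p, q = pc - N * qc, qc   # rewrite convergent pc/qc as N + p/q
    return N, p, q

# Samples-per-pulse ratio assumed to be 8541/31 = 275 16/31.
N, p, q = best_pixel_shift(8541, 31, q_max=31)
# Relative shifts d_k/(dx/q) = p*(k-1) mod q tile the pixel evenly,
# because gcd(|p|, q) = 1.
shifts = sorted((p * k) % q for k in range(q))
```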
Step 4: Denoised non-uniform interpolation. Pixel-SR restoration of the object $f(x, y)$ is obtained in two stages: non-uniform interpolation and image denoising [6, 7]. For higher computational efficiency, the two stages are performed at once by utilizing the values of the denoising filter as the weights in the interpolation process [Supplementary Fig. 1(e)]. Our objective is to restore $f(x, y)$ from Eq. (3) by minimizing the noise $n(x, y)$, i.e.

$$\hat{f} = \arg\min_{f} \big\|\, \tilde{I} - H f \,\big\|^2, \qquad (9)$$

where $W_\theta$ and $H$ are the matrix representations of the operator $W_\theta[\cdot]$ and the convolution kernel $h(x, y)$, respectively, and $\tilde{I} = W_\theta^{-1} I - \hat{I}_B$ is the dewarped, background-suppressed time-stretch signal from Step 3. Solving Eq. (9) directly is not feasible for ultrafast imaging applications, not only because of the sheer size of the matrix $H$, but also because of the potential noise amplification of this ill-conditioned problem [8]. In practice, Eq. (9) can be solved more efficiently by exploiting the fact that the kernel $h(x, y)$ is sparse. That is, the target pixel area of the high-resolution image $f(x, y)$ is set to be just slightly smaller than the area of the 2D convolution kernel $h(x, y)$, such that the kernel at the location of the $i$-th pixel $(x_i, y_i)$, which corresponds to the $i$-th column of the matrix $H$, is only weakly correlated with those of the neighbouring pixels, i.e. $|h_i^T h_j| \ll |h_i^T h_i|$ for all $j \neq i$. This pixel-size selection also achieves critical sampling of the high-resolution image $f(x, y)$. Hence, the minimizer of Eq. (9) can now be approximated as

$$\hat{f}_i \approx \frac{h_i^T \tilde{I}}{h_i^T h_i}, \qquad i = 1, \dots, L, \qquad (12)$$

where $L$ is the total number of high-resolution pixels of the image $f(x, y)$, and $h_i$ is the $i$-th column of the matrix $H$. For the $i$-th element of $\hat{f}$ at the registered coordinates $(x_i, y_i)$, the pixel value is

$$\hat{f}_i = \frac{\sum_m h(r_m)\, \tilde{I}_m}{\sum_m h(r_m)}, \qquad r_m = \sqrt{(x_i - x_m)^2 + (y_i - y_m)^2}, \qquad (13)$$

where the pixel value $\tilde{I}_m$ corresponds to the $m$-th nearest neighbour of the spatial coordinate $(x_i, y_i)$. Note that computing Eq. (13) is more efficient than computing Eq. (12), because only the pixel values $\tilde{I}_m$ within the effective radius $\sigma$ need to be selected, as depicted in Supplementary Fig. 3. The K-D tree structure generated earlier in the pixel-registration step is thus utilized for fast searching of the neighbouring pixels of any given coordinate $(x_i, y_i)$.
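The combined interpolate-and-denoise step can be sketched as follows, using Gaussian weights in line with the truncated 2D Gaussian kernel mentioned below; the function names, the 3-sigma truncation radius, and the toy data are our assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def restore(coords, values, grid_x, grid_y, sigma):
    """Denoised non-uniform interpolation onto a regular grid: a Gaussian
    kernel supplies the interpolation weights and acts as the denoising
    filter at the same time."""
    tree = cKDTree(coords)              # built once, reused for every pixel
    out = np.zeros((len(grid_y), len(grid_x)))
    for iy, yq in enumerate(grid_y):
        for ix, xq in enumerate(grid_x):
            idx = tree.query_ball_point([xq, yq], r=3.0 * sigma)
            if not idx:
                continue                # no registered samples nearby
            d2 = np.sum((coords[idx] - [xq, yq]) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            out[iy, ix] = np.dot(w, values[idx]) / w.sum()
    return out

# Registered (non-uniform) samples of a flat test field.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 4.0, size=(200, 2))
values = np.full(200, 2.5)
grid = np.array([1.0, 2.0, 3.0])
img = restore(coords, values, grid, grid, sigma=0.5)
```

Because the weights are normalized, a constant input field is reproduced exactly wherever neighbours fall inside the truncation radius.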
The kernel $h(x, y)$, which also acts as a denoising filter, is currently approximated by a truncated 2D Gaussian function. The quality of the restored image $\hat{f}(x, y)$ can be further improved by measuring the 2D point spread function (PSF) of the imaging setup and the one-dimensional impulse response of the anti-aliasing filter in the digitizer.