Example-Based Super-Resolution Fluorescence Microscopy

Capturing biological dynamics with high spatiotemporal resolution demands advances in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution beyond the diffraction limit, resolving near-molecular-level detail. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques remain fundamentally limited by the trade-off between spatial resolution and the longer image acquisition time it requires. Here, we demonstrate an example-based, computational method that aims to obtain super-resolution images from conventional imaging without increasing the imaging time. Given a low-resolution input image, the method estimates its super-resolution counterpart using an example database that contains paired super- and low-resolution images of the biological structures of interest. Computational imaging of cellular microtubules agrees approximately with experimental super-resolution STORM results. This new approach may improve the temporal resolution of experimental super-resolution fluorescence microscopy and provide a new path for large-data-aided biomedical imaging.
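The abstract describes estimating an SR image from an LR input by looking up matching LR patches in an example database of paired LR/SR patches. The following is a minimal numpy sketch of that idea, not the authors' implementation: it slides a window over the LR image, finds the nearest database LR patch by Euclidean distance, pastes in the paired SR patch, and averages overlapping contributions. All function and variable names are illustrative assumptions.

```python
import numpy as np

def example_sr(lr, db_lr, db_sr, lp=5, sp=3):
    """Example-based SR sketch: nearest-neighbour patch lookup.

    lr    : 2-D low-resolution input image
    db_lr : (N, lp*lp) flattened example LR patches
    db_sr : (N, sp*sp) paired example SR patches (central crops)
    """
    h, w = lr.shape
    out = np.zeros((h, w))
    wgt = np.zeros((h, w))          # weight map to average overlapping SR patches
    m = (lp - sp) // 2              # margin between the LR and SR patch extents
    for i in range(h - lp + 1):
        for j in range(w - lp + 1):
            patch = lr[i:i + lp, j:j + lp].ravel()
            # nearest example LR patch by squared Euclidean distance
            k = np.argmin(((db_lr - patch) ** 2).sum(axis=1))
            out[i + m:i + m + sp, j + m:j + m + sp] += db_sr[k].reshape(sp, sp)
            wgt[i + m:i + m + sp, j + m:j + m + sp] += 1
    out[wgt > 0] /= wgt[wgt > 0]
    return out
```

With a database that actually contains the input's patches, the interior of the reconstruction reproduces the ground truth exactly, which is a useful sanity check for the lookup logic.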

Dashed lines represent patch boundaries (simplified to ignore overlapping regions). In case (b) or (c), the entire LR image is first translated in the direction indicated by the arrow. The region moved out of the original image area is not considered for reconstruction. A blank region (green bar) is then added to the translated image to compensate for the patches on the boundary, so that all patches have the same size. These outer patches of the LR image are removed in the final reconstructed image, which is acceptable because the patch size is insignificant compared to the entire image. After being processed by the method, the SR image is translated back to its original position, as indicated by the arrow. The final SR image is constructed by averaging all the SR images translated by different distances. The maximal translation distance in both the horizontal and vertical dimensions is 21 pixels (the size of an LR patch). The blue dots illustrate the relative positional changes. This approach achieves better SR reconstruction, as shown in Supplementary Figure 3.
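The translate-and-average procedure described above can be sketched as follows. This is a simplified numpy illustration, assuming a generic LR-to-SR mapping `reconstruct` that preserves image size; it shifts the input, zero-pads the vacated border (the "blank region"), reconstructs, shifts back, and averages only the pixels that stayed inside the original frame.

```python
import numpy as np

def translate_and_average(lr, reconstruct, patch=21):
    """Average SR reconstructions over sub-patch translations (sketch).

    reconstruct : any callable mapping an LR image to a same-sized SR image
    patch       : maximal translation in each dimension (the LR patch size)
    """
    h, w = lr.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for dy in range(patch):
        for dx in range(patch):
            shifted = np.zeros_like(lr)            # blank region pads the border
            shifted[dy:, dx:] = lr[:h - dy or h, :w - dx or w]
            sr = reconstruct(shifted)
            # translate back; pixels shifted out of frame are ignored
            acc[:h - dy or h, :w - dx or w] += sr[dy:, dx:]
            cnt[:h - dy or h, :w - dx or w] += 1
    return acc / cnt
```

With an identity "reconstruction" the average returns the input unchanged, confirming that the bookkeeping of shifts and back-shifts is consistent.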

Supplementary Figure 3. Comparison between SR results without and with the translation & average approach.
Left column, without translation; right column, with translation and average. Without the translation-and-average process, the method produces inaccurate and discontinuous SR reconstructions, especially at fine structures.

Supplementary Figure 4. Quantitative evaluation of reconstruction.
The stability of the inference is quantified by calculating the standard deviation (SD) of the matches as the input LR image is translated in single-pixel steps. This SD describes the consistency of the inference around a given pixel as its containing patch is translated. When the library is built with sufficient true structures, all these patches should find identical matches. Hence, around such a location the SD is low, and the inference is considered highly reliable.
(a) Color map of the SD using simulated data, indicating that high-density regions typically have a relatively high SD. (b-d) Color maps of the SD for Figure 3b-k, respectively. (e) Relationship between the SD and the structure density at each pixel. The density is defined as the ratio of the area occupied by a valid structure to the entire patch area (e.g., density = 1 represents a structure covering the entire patch). The experimental and numerical data agree. Scale bars: 1 μm.
Supplementary Figure 5. Performance of the method at varying noise levels.
Structural similarity index measure (SSIM) as a function of the number of random microtubules in the database (from 0 to 4000), with noise levels in both the database and the input image (expressed as signal-to-noise ratio, SNR) of zero (cyan), 5 (yellow), 10 (gray), 20 (orange), and 100 (green), respectively. The blue curve shows the SSIM with different noise levels in the database and the input image (SNR 20 and 10, respectively). When the SNRs of the database and the input are the same, the reconstruction improves with library size at a slope similar to the noise-free case. However, when the SNRs of the library and the input differ, the SSIM worsens and barely improves.

SSIM as a function of the number of random microtubules in the database (from 0 to 4000) for different patch sizes: as used in this paper (LR, 21 × 21 pixels; SR, 15 × 15 pixels; blue curve), a smaller patch size (LR, 5 × 5 pixels; SR, 3 × 3 pixels; orange curve), and a larger patch size (LR, 29 × 29 pixels; SR, 23 × 23 pixels; orange curve), respectively.
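The SSIM used throughout this figure can be illustrated with the standard single-window form of the index; this is a minimal numpy sketch (the published metric is usually computed with a sliding Gaussian window, e.g. via scikit-image), with constants chosen for a unit dynamic range.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM; c1, c2 follow K1=0.01, K2=0.03 on unit dynamic range."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den
```

The index equals 1 only for identical images and decreases as the reconstruction deviates from the reference, which is why it serves as the quality measure on the vertical axes of this figure.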