Deep learning-based point-scanning super-resolution imaging

Abstract

Point-scanning imaging systems are among the most widely used tools for high-resolution cellular and tissue imaging, benefiting from arbitrarily defined pixel sizes. The resolution, speed, sample preservation and signal-to-noise ratio (SNR) of point-scanning systems are difficult to optimize simultaneously. We show these limitations can be mitigated via the use of deep learning-based supersampling of undersampled images acquired on a point-scanning system, which we term point-scanning super-resolution (PSSR) imaging. We designed a ‘crappifier’ that computationally degrades high SNR, high-pixel resolution ground truth images to simulate low SNR, low-resolution counterparts for training PSSR models that can restore real-world undersampled images. For high spatiotemporal resolution fluorescence time-lapse data, we developed a ‘multi-frame’ PSSR approach that uses information in adjacent frames to improve model predictions. PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed and sensitivity. All the training data, models and code for PSSR are publicly available at 3DEM.org.
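To make the 'crappifier' concept concrete, below is a minimal Python sketch of semisynthetic training-pair generation: inject noise into a high-SNR, high-resolution image, then downsample it. The function name, noise parameters and order of operations are illustrative assumptions, not the published settings (see Extended Data Fig. 5 for the noise injections actually compared; the salt-and-pepper plus additive Gaussian mixture performed best).

```python
import numpy as np
from skimage.transform import rescale
from skimage.util import random_noise

def crappify(hr, scale=4, sp_amount=0.005, gauss_sigma=0.05, seed=None):
    """Degrade a high-SNR, high-resolution image into a semisynthetic
    low-SNR, low-resolution training input (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    img = hr.astype(np.float64) / hr.max()                 # normalize to [0, 1]
    img = random_noise(img, mode='s&p', amount=sp_amount)  # salt & pepper noise
    img = img + rng.normal(0.0, gauss_sigma, img.shape)    # additive Gaussian noise
    img = np.clip(img, 0.0, 1.0)
    return rescale(img, 1.0 / scale, order=1, anti_aliasing=True)  # enlarge pixel size
```

Training pairs are then (crappified LR, original HR) tuples; at inference, the trained model is applied to real undersampled acquisitions.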

Fig. 1: Evaluation of crappifiers with different noise injection on EM data.
Fig. 2: Restoration of semisynthetic and real-world EM testing data using PSSR model trained on semisynthetically generated training pairs.
Fig. 3: PSSR model is effective for multiple EM modalities and sample types.
Fig. 4: Multi-frame PSSR time-lapses of mitochondrial dynamics.
Fig. 5: Spatiotemporal analysis of mitochondrial motility in neurons.

Data availability

Example training data and pretrained models are included in the GitHub release (https://github.com/BPHO-Salk/PSSR). Our complete training and testing datasets and data sources are available at the Texas Data Repository (https://doi.org/10.18738/T8/YLCK5A). Source data are provided with this paper.

Code availability

PSSR source code and documentation are available for download on GitHub (https://github.com/BPHO-Salk/PSSR) and are free for use under the BSD 3-Clause License.

Acknowledgements

We thank J. Sedat, T. Sejnowski, A. Pinto-Duarte, F. Jug, M. Weigert, K. Prakash, S. Saalfeld and the entire NSF NeuroNex consortium for invaluable advice and critical feedback on our data and the manuscript. We also thank H. Hess and S. Xu for sharing their FIB–SEM data. U.M., L.F., T.Z. and S.W.N. are supported by the Waitt Foundation, core grant application no. NCI CCSG (CA014195). U.M. is a Chan-Zuckerberg Initiative Imaging Scientist and supported by NSF NeuroNex Award No. 2014862, and National Institutes of Health (NIH) grant no. R21 DC018237. C.R.S. is supported by NIH F32 GM137580. J.H. and F.M. are supported by the Wicklow AI in Medicine Research Initiative. K.H. is supported by NSF grant nos. 1707356 and NSF NeuroNex Award no. 2014862 and NIH/NIMH grant no. 2R56MH095980-06. Research in the laboratory of G.P. is supported by the Parkinson’s Foundation (PF-JFA-1888) and NIH/NIGMS grant no. R35GM128823. S.B.Y. is funded by NIH grant no. T32GM007240. Y.K. was supported by Japan Society for the Promotion of Science KAKENHI grant nos. 17H06311 and 19H03336, and by AMED grant no. JP20dm0207084. We acknowledge the Texas Advanced Computing Center at The University of Texas at Austin for providing GPU resources that have contributed to the research results reported within this paper. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the NVIDIA Quadro M5000 and NVIDIA Titan V used for this research.

Author information

Contributions

L.F. designed the research, performed or participated in software development, all experiments and analyses, and wrote the paper. F.M. and J.H. designed the research and, together with K.K., A.A.L., Z.L. and A.S., contributed to software development. J.M., K.H., S.W.N. and Y.K. collected EM data. C.R.S., T.Z. and M.W. collected MitoTracker data. S.Y. and G.P. collected neuronal mitochondria data. S.W.N. and L.K. performed vesicle segmentation analysis. C.R.S., S.Y. and G.P. performed neuronal mitochondrial motility analysis. S.W.N., S.Y., G.P., A.A.L., Z.L. and A.S. contributed to data visualization. U.M., K.H. and Z.Z. contributed resources. U.M. and K.H. secured funding. U.M. conceived and designed the research, participated in software development, all experiments and analyses, oversaw the project and wrote the paper.

Corresponding author

Correspondence to Uri Manor.

Ethics declarations

Competing interests

U.M. and L.F. have filed a patent application covering some aspects of this work (International Patent WO2020041517A9: ‘Systems and methods for enhanced imaging and analysis’, inventors U.M. and L.F., published on 1 October 2020). The rest of the authors declare no competing interests.

Additional information

Peer review information Nature Methods thanks Jie Tian and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Rita Strack was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 PSSR neural network architecture.

Shown is the ResNet-34-based U-Net architecture. Single-frame PSSR (PSSR-SF) and multi-frame PSSR (PSSR-MF) have one and five input channels, respectively.
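The only architectural difference between the two variants is the channel count of the encoder's first convolution. A hedged torchvision-based sketch is below; the published model was built with fastai, so the helper name and defaults here are assumptions for illustration.

```python
import torch.nn as nn
from torchvision.models import resnet34

def pssr_encoder(in_channels: int) -> nn.Module:
    """ResNet-34 backbone for the U-Net encoder: PSSR-SF takes one grayscale
    frame (in_channels=1), PSSR-MF takes five neighboring frames stacked as
    channels (in_channels=5). Everything downstream is unchanged."""
    enc = resnet34(weights=None)  # train from scratch on microscopy data
    enc.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                          stride=2, padding=3, bias=False)
    return enc
```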

Extended Data Fig. 2 NanoJ-SQUIRREL error-maps of EM data.

NanoJ-SQUIRREL was used to calculate the resolution scaled error (RSE) and resolution scaled Pearson's coefficient (RSP) for both semisynthetic and real-world acquired low-resolution (LR), bilinear-interpolated (LR-Bilinear) and PSSR-restored (LR-PSSR) images versus ground truth high-resolution (HR) images. For these representative images from Fig. 2, the RSE and RSP images are shown along with the difference images for each output.
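For reference, the two SQUIRREL metrics reduce to the computations sketched below. This is a simplified illustration assuming the restored image is already registered and resampled to the ground-truth grid, with a least-squares linear intensity match standing in for SQUIRREL's internal rescaling; it is not the NanoJ-SQUIRREL implementation.

```python
import numpy as np

def rse_rsp(reference, test):
    """Resolution-scaled error (RSE; RMSE after linear intensity matching)
    and resolution-scaled Pearson coefficient (RSP) between a ground-truth
    reference and a restored image on the same pixel grid."""
    ref = reference.astype(np.float64).ravel()
    t = test.astype(np.float64).ravel()
    gain, offset = np.polyfit(t, ref, 1)   # least-squares intensity match
    t = gain * t + offset
    rse = np.sqrt(np.mean((ref - t) ** 2))
    rsp = np.corrcoef(ref, t)[0, 1]        # Pearson correlation coefficient
    return rse, rsp
```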

Extended Data Fig. 3 Comparison of PSSR vs. BM3D on EM data.

PSSR restoration was compared to the block-matching and 3D filtering (BM3D) denoising algorithm. BM3D was applied to low-resolution real-world SEM images before (LR-BM3D-Bilinear) and after (LR-Bilinear-BM3D) bilinear upsampling. A wide range of sigma (σ ∈ (0, 95], with a step size of 5), the key parameter defining the assumed zero-mean white Gaussian noise in BM3D, was explored. Images of the same region from the LR input, bilinear-upsampled, PSSR-restored and ground truth versions are displayed in (a). Results of LR-BM3D-Bilinear (b, top row) and LR-Bilinear-BM3D (b, bottom row) with σ = 10, 15, …, 35 are shown. PSNR and SSIM results of LR-BM3D-Bilinear and LR-Bilinear-BM3D across the explored range of sigma are plotted in (c) and (d). Metrics for bilinear-upsampled and PSSR-restored images of the same testing set are shown as dashed lines in orange (LR-Bilinear: PSNR = 26.28 ± 0.085; SSIM = 0.767 ± 0.0031) and blue (LR-PSSR: PSNR = 27.21 ± 0.084; SSIM = 0.802 ± 0.0026). n = 12 independent images for all conditions. Values are shown as mean ± SEM.
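A sweep of this kind can be reproduced along the following lines. The sketch assumes 8-bit images rescaled to [0, 1], the `bm3d` PyPI package and bilinear resizing from scikit-image; it is an illustration of the comparison, not the exact evaluation script.

```python
import bm3d  # pip install bm3d
from skimage.transform import resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def bm3d_sweep(lr, hr):
    """Denoise before (LR-BM3D-Bilinear) or after (LR-Bilinear-BM3D) bilinear
    upsampling, sweeping sigma over (0, 95] in steps of 5, and score both
    variants against the high-resolution ground truth."""
    lr, hr = lr / 255.0, hr / 255.0
    scores = {}
    for sigma in range(5, 100, 5):
        s = sigma / 255.0  # express sigma on the same [0, 1] intensity scale
        pre = resize(bm3d.bm3d(lr, sigma_psd=s), hr.shape, order=1)
        post = bm3d.bm3d(resize(lr, hr.shape, order=1), sigma_psd=s)
        scores[sigma] = {
            name: (peak_signal_noise_ratio(hr, img, data_range=1.0),
                   structural_similarity(hr, img, data_range=1.0))
            for name, img in (('LR-BM3D-Bilinear', pre),
                              ('LR-Bilinear-BM3D', post))
        }
    return scores
```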

Source data

Extended Data Fig. 4 Undersampling substantially reduces photobleaching.

U2OS cells stained with MitoTracker were imaged every 2 s with the same laser power (2.5 μW) and pixel dwell time (~1 μs), but at 16× lower resolution (196 × 196 nm xy pixel size) than full-resolution Airyscan acquisitions (~49 × 49 nm xy pixel size). Mean intensity plots show the relative rates of fluorescence intensity loss over time (that is, photobleaching) for LR, LR-PSSR and HR images.
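The photobleaching readout is simply the per-frame mean intensity normalized to the first frame; a minimal sketch, assuming a (T, H, W) stack layout:

```python
import numpy as np

def bleaching_curve(stack):
    """Relative mean intensity per frame of a (T, H, W) time-lapse, normalized
    to frame 0; a steadily decreasing curve indicates photobleaching."""
    means = stack.reshape(len(stack), -1).mean(axis=1)
    return means / means[0]
```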

Source data

Extended Data Fig. 5 Evaluation of crappifiers with different noise injection on MitoTracker data.

Examples of crappified training images, visualized results and metrics (PSNR, SSIM and FRC resolution) are presented for PSSR-SF models trained on high- and low-resolution pairs semisynthetically generated by crappifiers with different noise injection. a, An example of crappified training images generated by different crappifiers, including 'No noise' (downsampled pixel size only, no added noise), Salt & Pepper, Gaussian, Additive Gaussian, and a mixture of Salt & Pepper plus Additive Gaussian noise. A high-resolution version of the same region is also included. b, Restoration performance of PSSR models trained with the different crappifiers (No noise, Salt & Pepper, Gaussian, Additive Gaussian, and a mixture of Salt & Pepper plus Additive Gaussian noise). The LR input and ground truth of the example testing ROI are also shown. PSNR (c), SSIM (d) and FRC (e) quantification show that the PSSR model using the 'Salt & Pepper + Additive Gaussian' crappifier yielded the best overall performance (n = 10 independent time-lapses of fixed samples with n = 10 timepoints for all conditions). All values are shown as mean ± SEM. P values are specified in the figure for 0.0001 < p < 0.05. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; ns, not significant; two-sided paired t-test.

Source data

Extended Data Fig. 6 Quantitative comparison of CARE and PSSR-SF with PSSR-MF and Rolling Average (RA) methods for time-lapse data.

PSNR (a) and SSIM (b) quantification show a decrease in accuracy when applying RA to LR-CARE and LR-PSSR-SF, whereas multi-frame PSSR outperforms both LR-PSSR-SF and CARE before and after RA processing. Data points are color-coded by cell. See Fig. 4c for visualized comparisons, and Supplementary Video 6 for a video comparison of the entire time-lapse for CARE, LR-PSSR-SF, LR-PSSR-SF-RA and LR-PSSR-MF. n = 5 independent time-lapses with n = 30 timepoints each, achieving similar results. All values are shown as mean ± SEM. ****p < 0.0001; two-sided paired t-test.
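The contrast between the two strategies can be sketched as follows: rolling averaging collapses temporal information by smoothing, whereas PSSR-MF hands the raw neighboring frames to the network as input channels. Function names and the (T, H, W) layout are illustrative assumptions.

```python
import numpy as np

def rolling_average(stack, window=5):
    """Five-frame rolling average of a (T, H, W) time-lapse, the post hoc
    smoothing compared here; it suppresses flicker but blurs anything that
    moves between frames."""
    cs = np.cumsum(stack.astype(np.float64), axis=0)
    cs = np.concatenate([np.zeros((1,) + stack.shape[1:]), cs], axis=0)
    return (cs[window:] - cs[:-window]) / window  # (T - window + 1, H, W)

def multiframe_inputs(stack, n=5):
    """PSSR-MF instead keeps the n raw neighboring frames as channels,
    (T, H, W) -> (T - n + 1, n, H, W), so the network can exploit temporal
    context without averaging it away."""
    return np.stack([stack[t:t + n] for t in range(len(stack) - n + 1)])
```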

Source data

Extended Data Fig. 7 NanoJ-SQUIRREL error-maps of MitoTracker data.

NanoJ-SQUIRREL was used to calculate the resolution scaled error (RSE) and resolution scaled Pearson's coefficient (RSP) for both semisynthetic and real-world acquired low-resolution (LR), bilinear-interpolated (LR-Bilinear) and PSSR-restored (LR-PSSR) images versus ground truth high-resolution (HR) images. For these representative images from Fig. 4, the RSE and RSP images are shown along with the difference images for each output.

Extended Data Fig. 8 Comparison of PSSR with the BM3D denoising method on MitoTracker data.

PSSR-restored images were compared to the results of applying the BM3D denoising algorithm to low-resolution real-world MitoTracker images before (LR-BM3D-Bilinear) and after (LR-Bilinear-BM3D) bilinear upsampling. A wide range of sigma (σ ∈ (0, 95], with a step size of 5) was explored. Examples of the same region from the LR input, bilinear-upsampled, PSSR-SF-restored, PSSR-MF-restored and ground truth images are displayed (a, top row). Images from the top six results (evaluated by both PSNR and SSIM values) of LR-BM3D-Bilinear (a, middle row) and LR-Bilinear-BM3D (a, bottom row) are shown. PSNR and SSIM results of LR-BM3D-Bilinear and LR-Bilinear-BM3D across the explored range of sigma are plotted in (b) and (c). Metrics for bilinearly upsampled, PSSR-SF-restored and PSSR-MF-restored images of the same testing set are shown as dashed lines in orange (LR-Bilinear: PSNR = 24.42 ± 0.367; SSIM = 0.579 ± 0.0287), blue (LR-PSSR-SF: PSNR = 25.72 ± 0.323; SSIM = 0.769 ± 0.0139) and green (LR-PSSR-MF: PSNR = 26.89 ± 0.322; SSIM = 0.791 ± 0.0133). In this fluorescence MitoTracker example, BM3D outperforms bilinear upsampling when the noise distribution is carefully tuned, but its overall performance by both PSNR and SSIM remains worse than single-frame PSSR (LR-PSSR-SF). Notably, multi-frame PSSR (LR-PSSR-MF) yields the best performance. n = 10 independent time-lapses of fixed samples with n = 6–10 timepoints each for all conditions. Values are shown as mean ± SEM.

Source data

Extended Data Fig. 9 NanoJ-SQUIRREL error-maps of neuronal mitochondria data.

NanoJ-SQUIRREL was used to calculate the resolution scaled error (RSE) and resolution scaled Pearson's coefficient (RSP) for both semisynthetic and real-world acquired low-resolution (LR), bilinear-interpolated (LR-Bilinear) and PSSR-restored (LR-PSSR) images versus ground truth high-resolution (HR) images. For these representative images from Fig. 5, the RSE and RSP images are shown along with the difference images for each output.

Extended Data Fig. 10 PSSR facilitates detection of mitochondrial motility and dynamics.

Rat hippocampal neurons expressing mito-dsRed were undersampled with a confocal detector at 170 nm pixel resolution (LR) to facilitate faster frame rates, then restored with PSSR (LR-PSSR). a, Before and after timepoints of the event shown in Fig. 5, wherein two adjacent mitochondria pass one another; the event cannot be resolved in the original low-resolution (LR) or bilinear-interpolated (LR-Bilinear) images but is clearly resolved in the LR-PSSR image. b, Kymographs of an LR versus LR-PSSR time-lapse, facilitating detection of a mitochondrial fission event (yellow arrow).
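Kymographs such as those in b can be generated by sampling each frame along a fixed user-drawn path; a small illustrative sketch (the path coordinates and function name are assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def kymograph(stack, ys, xs):
    """Sample a (T, H, W) time-lapse along a path given by pixel coordinates
    (ys, xs), returning a (T, len(path)) kymograph: stationary mitochondria
    trace vertical lines, moving ones trace diagonal streaks."""
    coords = np.vstack([ys, xs]).astype(np.float64)
    return np.stack([map_coordinates(frame.astype(np.float64), coords, order=1)
                     for frame in stack])
```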

Supplementary information

Supplementary Information

Supplementary Note 1 and Fig. 1.

Reporting Summary

Supplementary Video 1

Comparison of HR and LR SBFSEM 3View acquisition with 2 nm and 8 nm pixel resolutions. In the 2-nm-pixel image stack, the high contrast enabled by a relatively higher electron dose ensured high resolution and high SNR, but at the same time caused severe sample damage, resulting in a failure to serially section the tissue after imaging the block face. Conversely, LR acquisition at 8 nm pixel size facilitated serial block-face imaging, but the compromised resolution and SNR made it impossible to discern finer structures in the sample.

Supplementary Video 2

Image restoration achieved by a tSEM-trained PSSR model enables higher-resolution SBFSEM imaging. Shown are the lower resolution SBFSEM acquisition input (left) and the PSSR output (right).

Supplementary Video 3

Resolution restoration achieved by tSEM-trained PSSR model enables higher-resolution FIB–SEM acquisition. Shown are the lower resolution FIB–SEM acquisition input (left) and the PSSR output (right).

Supplementary Video 4

PSSR facilitates efficient 3D segmentation and reconstruction. Shown is the rendering of the 3D reconstruction of multiple biological structures using the PSSR-processed FIB–SEM stack described in Fig. 3 and Supplementary Video 3. Specifically, this reconstruction includes mitochondria (purple), endoplasmic reticulum (yellow), presynaptic vesicles (gray), the postsynaptic neuron’s plasma membrane (blue), the postsynaptic density (red) and the presynaptic neuron’s plasma membrane (green).

Supplementary Video 5

Photobleaching and cell stress due to high laser dose during HR live-cell imaging. Shown is a 10-min HR time-lapse video of a U2OS cell stained with MitoTracker Red imaged with an Airyscan microscope. The live-cell acquisition suffered from photobleaching and phototoxicity, as reflected by the steadily decreasing fluorescence intensity over time as well as the swelling and fragmenting mitochondria. Imaging conditions were ~35 μW laser power, 2 s frame rate and 1.15 μm pixel size.

Supplementary Video 6

PSSR-MF reduces the flicker observed with single-frame models (LR-CARE and LR-PSSR-SF) without the loss of spatiotemporal resolution that occurs with rolling-frame averaging (LR-CARE-RA and LR-PSSR-SF-RA). Shown is the restoration performance of LR-PSSR-SF, PSSR-SF with a five-frame rolling average (LR-PSSR-SF-RA) and five-frame multi-frame PSSR (LR-PSSR-MF). Rolling averaging alleviated the signal flickering observed in PSSR-SF at the cost of both temporal and spatial resolution. See Extended Data Fig. 6 for more detail.

Supplementary Video 7

LR input and five-frame multi-frame PSSR (LR-PSSR). Multi-frame PSSR restores resolution and SNR to Airyscan-equivalent quality with no bleaching and higher imaging speed. Shown are the PSSR-MF restoration output (right, ~49 nm pixels) and its LR acquisition input (left, ~196 nm pixels). The digitally magnified region highlights a mitochondrial fission event that is much more easily detected in the PSSR output.

Supplementary Video 8

Comparison of HR Airyscan and LR confocal time-lapse acquisition of neuronal mitochondria. Corresponding kymographs are also displayed to illustrate the difference in temporal resolution. The Airyscan acquisition has higher spatial resolution but lower temporal resolution due to lower imaging speed, while confocal acquisition gives higher temporal resolution but lower spatial resolution.

Supplementary Video 9

Comparison of PSSR (right) versus bilinear interpolation (left). The enlarged region highlights two adjacent mitochondria passing one another in an axon, an event resolved only in the PSSR output. The line plot shows the normalized fluorescence intensity of the indicated cross-section.

Source data

Source Data Fig. 1

Statistical source data for PSNR, SSIM and FRC resolution plots (c–e).

Source Data Fig. 2

Statistical source data for PSNR, SSIM and FRC resolution plots of both semisynthetic (b) and real-world testing data (c).

Source Data Fig. 3

Statistical source data for EM vesicle segmentation analysis plots (e).

Source Data Fig. 4

Statistical source data for flickering quantification plots (b), PSNR, SSIM and FRC resolution plots (f) and fission event counting plots (h–k).

Source Data Fig. 5

Statistical source data for PSNR, SSIM and FRC resolution plots of both semisynthetic and real-world testing data (b) and mitochondrion mobility analysis plots (f–i).

Source Data Extended Data Fig. 3

Statistical source data for PSNR and SSIM plots (c,d).

Source Data Extended Data Fig. 4

Source data for photobleaching intensity plots.

Source Data Extended Data Fig. 5

Statistical source data for PSNR, SSIM and FRC resolution plots (c–e).

Source Data Extended Data Fig. 6

Statistical source data for PSNR and SSIM plots (a,b).

Source Data Extended Data Fig. 8

Statistical source data for PSNR and SSIM plots (c,d).

About this article

Cite this article

Fang, L., Monroe, F., Novak, S.W. et al. Deep learning-based point-scanning super-resolution imaging. Nat Methods 18, 406–416 (2021). https://doi.org/10.1038/s41592-021-01080-z
