
Multiview confocal super-resolution microscopy

Abstract

Confocal microscopy1 remains a major workhorse in biomedical optical microscopy owing to its reliability and flexibility in imaging various samples, but suffers from substantial point spread function anisotropy, diffraction-limited resolution, depth-dependent degradation in scattering samples and volumetric bleaching2. Here we address these problems, enhancing confocal microscopy performance from the sub-micrometre to millimetre spatial scale and the millisecond to hour temporal scale, improving both lateral and axial resolution more than twofold while simultaneously reducing phototoxicity. We achieve these gains using an integrated, four-pronged approach: (1) developing compact line scanners that enable sensitive, rapid, diffraction-limited imaging over large areas; (2) combining line-scanning with multiview imaging, developing reconstruction algorithms that improve resolution isotropy and recover signal otherwise lost to scattering; (3) adapting techniques from structured illumination microscopy, achieving super-resolution imaging in densely labelled, thick samples; (4) synergizing deep learning with these advances, further improving imaging speed, resolution and duration. We demonstrate these capabilities on more than 20 distinct fixed and live samples, including protein distributions in single cells; nuclei and developing neurons in Caenorhabditis elegans embryos, larvae and adults; myoblasts in imaginal disks of Drosophila wings; and mouse renal, oesophageal, cardiac and brain tissues.


Fig. 1: Multiview line confocal microscopy.
Fig. 2: Multiview confocal live imaging.
Fig. 3: Multiview super-resolution microscopy.
Fig. 4: Deep learning enhances multiview super-resolution imaging.

Data availability

The data that support the findings of this study are included in Extended Data Figs. 1–18 and Supplementary Videos 1–8, and some representative source data for the figures (Figs. 1c, g, 2a, h, 3b, 4b, i) are publicly available at https://zenodo.org/record/5495955#.YVItPTHMJaS. Other datasets are available from the corresponding author upon reasonable request. Source data are provided with this paper.

Code availability

The custom codes used in this study are available upon request, with most software and test data publicly available at https://github.com/hroi-aim/multiviewSR and https://github.com/AiviaCommunity/3D-RCAN.

References

  1. Pawley, J. B. (ed.) Handbook of Biological Confocal Microscopy 3rd edn (Springer, 2006).

  2. Laissue, P. P., Alghamdi, R. A., Tomancak, P., Reynaud, E. G. & Shroff, H. Assessing phototoxicity in live fluorescence imaging. Nat. Methods 14, 657–661 (2017).


  3. Baumgart, E. & Kubitscheck, U. Scanned light sheet microscopy with confocal slit detection. Opt. Express 20, 21805–21814 (2012).


  4. Kumar, A. et al. Using stage- and slit-scanning to improve contrast and optical sectioning in dual-view inverted light-sheet microscopy (diSPIM). Biol. Bull. 231, 26–39 (2016).


  5. Guo, M. et al. Rapid image deconvolution and multiview fusion for optical microscopy. Nat. Biotechnol. 38, 1337–1346 (2020).


  6. Lucy, L. B. An iterative technique for the rectification of observed distributions. Astron. J. 79, 745–754 (1974).


  7. Richardson, W. H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55–59 (1972).


  8. Descloux, A., Grußmayer, K. S. & Radenovic, A. Parameter-free image resolution estimation based on decorrelation analysis. Nat. Methods 16, 918–924 (2019).


  9. Chen, F., Tillberg, P. & Boyden, E. S. Expansion microscopy. Science 347, 543–548 (2015).


  10. He, K., Gkioxari, G., Dollár, P. & Girshick, R. Mask R-CNN. In 2017 IEEE Conf. Computer Vision (ICCV) (eds Ikeuchi, K. et al.) 2980–2988 (2017).

  11. Lin, T.-Y. et al. Microsoft COCO: common objects in context. In Computer Vision – ECCV 2014 (eds Fleet, D. et al.) 740–755 (Springer, 2014).

  12. Kosmach, A. et al. Monitoring mitochondrial calcium and metabolism in the beating MCU-KO heart. Cell Rep. 37, 109846 (2021).

  13. Wu, Y. et al. Inverted selective plane illumination microscopy (iSPIM) enables coupled cell identity lineaging and neurodevelopmental imaging in Caenorhabditis elegans. Proc. Natl Acad. Sci. USA 108, 17708–17713 (2011).


  14. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).


  15. Sulston, J. E., Schierenberg, E., White, J. G. & Thomson, J. N. The embryonic cell lineage of the nematode Caenorhabditis elegans. Dev. Biol. 100, 64–119 (1983).


  16. Wu, Y. et al. Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy. Nat. Biotechnol. 31, 1032–1038 (2013).


  17. Kumar, A. et al. Dual-view plane illumination microscopy for rapid and spatially isotropic imaging. Nat. Protoc. 9, 2555–2573 (2014).


  18. Duncan, L. H. et al. Isotropic light-sheet microscopy and automated cell lineage analyses to catalogue Caenorhabditis elegans embryogenesis with subcellular resolution. J. Vis. Exp. 148, e59533 (2019).


  19. Towlson, E. K., Vértes, P. E., Ahnert, S. E., Schafer, W. R. & Bullmore, E. T. The rich club of the C. elegans neuronal connectome. J. Neurosci. 33, 6380–6387 (2013).


  20. White, J. G., Southgate, E., Thomson, J. N. & Brenner, S. The structure of the nervous system of the nematode Caenorhabditis elegans. Phil. Trans. R. Soc. B 314, 1–340 (1986).


  21. Armenti, S. T., Lohmer, L. L., Sherwood, D. R. & Nance, J. Repurposing an endogenous degradation system for rapid and targeted depletion of C. elegans proteins. Development 141, 4640–4647 (2014).


  22. Wu, Y. & Shroff, H. Faster, sharper, and deeper: structured illumination microscopy for biological imaging. Nat. Methods 15, 1011–1019 (2018); correction 16, 205 (2019).


  23. Fischer, R. S., Gardel, M. L., Ma, X., Adelstein, R. S. & Waterman, C. M. Local cortical tension by myosin II guides 3D endothelial cell branching. Curr. Biol. 19, 260–265 (2009).


  24. York, A. G. et al. Instant super-resolution imaging in live cells and embryos via analog image processing. Nat. Methods 10, 1122–1126 (2013).


  25. Gambarotto, D. et al. Imaging cellular ultrastructures using expansion microscopy (U-ExM). Nat. Methods 16, 71–74 (2019).


  26. Tabara, H., Motohashi, T. & Kohara, Y. A multi-well version of in situ hybridization on whole mount embryos of Caenorhabditis elegans. Nucleic Acids Res. 24, 2119–2124 (1996).


  27. Chen, J. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678–687 (2021).


  28. Wu, Y. et al. Simultaneous multiview capture and fusion improves spatial resolution in wide-field and light-sheet microscopy. Optica 3, 897–910 (2016).


  29. Barth, R., Bystricky, K. & Shaban, H. A. Coupling chromatin structure and dynamics by live super-resolution imaging. Sci. Adv. https://doi.org/10.1126/sciadv.aaz2196 (2020).


  30. Han, X. et al. A polymer index-matched to water enables diverse applications in fluorescence microscopy. Lab Chip 21, 1549–1562 (2021).


  31. Chen, B.-C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).


  32. Gustafsson, M. G. L. et al. Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination. Biophys. J. 94, 4957–4970 (2008).


  33. Rego, E. H. et al. Nonlinear structured-illumination microscopy with a photoswitchable protein reveals cellular structures at 50-nm resolution. Proc. Natl Acad. Sci. USA 109, E135–E143 (2012).


  34. Krüger, J.-R., Keller-Findeisen, J., Geisler, C. & Egner, A. Tomographic STED microscopy. Biomed. Opt. Express 11, 3139–3163 (2020).


  35. Wu, Y. et al. Reflective imaging improves spatiotemporal resolution and collection efficiency in light sheet microscopy. Nat. Commun. 8, 1452 (2017).


  36. Shroff, H., York, A., Giannini, J. P. & Kumar, A. Resolution enhancement for line scanning excitation microscopy systems and methods. US patent 10,247,930 (2019).

  37. Wang, H. et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat. Methods 16, 103–110 (2019).


  38. Ji, N. Adaptive optical fluorescence microscopy. Nat. Methods 14, 374–380 (2017).


  39. Royer, L. A. et al. Adaptive light-sheet microscopy for long-term, high-resolution imaging in live organisms. Nat. Biotechnol. 34, 1267–1278 (2016).


  40. Liu, T.-L. et al. Observing the cell in its native state: imaging subcellular dynamics in multicellular organisms. Science 360, eaaq1392 (2018).


  41. Zheng, W. et al. Adaptive optics improves multiphoton super-resolution imaging. Nat. Methods 14, 869–872 (2017).



Acknowledgements

We thank C. Liu and F. Zhang from the NHLBI Transgenic Core for making the TOMM20–mNeonGreen transgenic mouse line; E. Jorgensen, C. Frøkjær-Jensen and M. Rich for sharing the EG6994 C. elegans strain; L. Samelson for the gift of the Jurkat T cells; G. Patterson for the H2B–GFP plasmid; J. Hammer for the F-tractin–tdTomato plasmid; N. Koonce and L. Shao for sharing images of larvae imaged with spinning-disk microscopy and iSIM; SVision LLC for maintaining and updating the 3D RCAN GitHub site; X. Li for assistance with imaging samples on the OMX 3D SIM; E. Tyler and A. Hoofring (NIH Medical Arts) for help with figure preparation; R. Leapman, H. Eden, S. Parekh and M. Guo for feedback on the manuscript; Q. Dai for supporting X.H.’s visit to H.S.’s lab; and C. Waterman for supporting R.F.’s participation in this project. We thank the Research Center for Minority Institutions programme, the Marine Biological Laboratories (MBL) and the Instituto de Neurobiología de la Universidad de Puerto Rico for providing meeting and brainstorming platforms. H.S., P.L.R. and D.C.-R. acknowledge the Whitman and Fellows programme at MBL for providing funding and space for discussions valuable to this work. Research in the D.C.-R. lab was supported by NIH grant no. R24-OD016474, NIH R01NS076558, DP1NS111778 and by an HHMI Scholar Award. X.H. was supported by an international exchange fellowship from the Chinese Scholar Council. This research was supported by the intramural research programmes of the National Institute of Biomedical Imaging and Bioengineering; the National Heart, Lung, and Blood Institute; the Eunice Kennedy Shriver National Institute of Child Health and Human Development; and the National Cancer Institute within the National Institutes of Health. C.S.
acknowledges funding from the National Institute of General Medical Sciences of the NIH under award number R25GM109439 (project title: University of Chicago Initiative for Maximizing Student Development (IMSD)) and NIBIB under grant number T32 EB002103. Y.P. and Y. Sun are supported by the Center for Cancer Research, the Intramural Program of the National Cancer Institute, NIH (Z01-BC 006150). This research is funded in part by the Gordon and Betty Moore Foundation. A.U. acknowledges support from NIH R01 GM131054. S.R. acknowledges support from NIH R35GM124878. This work utilized the computational resources of the NIH HPC Biowulf cluster (http://hpc.nih.gov), and we also thank the Office of Data Science Strategy, NIH, for providing a seed grant enabling us to train deep-learning models using cloud-based computational resources. The NIH and its staff do not recommend or endorse any company, product or service.

Author information

Contributions

Conceived idea: Y.W., H.S. Designed and assembled line scanners: M.G., J.S.D. Tested line scanners: Y.W., X.H., M.G., J.S.D., H.S. Designed optical set-up: Y.W., X.H., H.S. Built optical set-up: Y.W., X.H. Designed reconstruction algorithms: Y.W., C.S., P.L.R., H.S. Wrote software: Y.W., X.H., J.L., C.S. Developed deep-learning nuclear segmentation pipeline: J.L. Developed cloud and local pipelines for deep learning: Y.W., J.L., J.C., H.S. Designed experiments: Y.W., X.H., Y. Su, T.S., I.R.-S., R.F., A.P., S.R., A.U., D.C.-R., H.S. Performed experiments: Y.W., X.H., Y. Su, T.S., I.R.-S., C.C., R.F., A.P., X.W. Prepared samples: Y.W., X.H., Y. Su, T.S., I.R.-S., R.F., A.P., C.C., J.S., R.C. Developed expansion microscopy procedures: Y. Su. Provided reagents or equipment: C.C., X.W., L.B., Y. Sun, L.H.D., Y.P., Y.-B.S. All authors analysed data. Wrote paper: Y.W., X.H., Y. Su, H.S., with input from all authors. Supervised research: Y.W., J.D., Y.P., Y.-B.S., E.M., S.R., A.U., D.C.-R., P.L.R., H.S. Directed research: H.S.

Corresponding author

Correspondence to Yicong Wu.

Ethics declarations

Competing interests

Y.W., X.H., P.L.R. and H.S. have filed invention disclosures covering aspects of this work (US patent application no. 63/001,672 and PCT application no. WO2021/202316). M.G. and J.S.D. are employees of Applied Scientific Instrumentation, which manufactures the line scanning units used in this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Instrument overview.

a) Photograph of instrument, highlighting objectives, cameras, sample area. b) CAD rendering of MEMS line scanner. c) Optical set-up. Diode lasers are combined and passed through an acousto-optic tunable filter (AOTF) for shuttering and power control, before being directed into broadband single-mode fibres. Galvanometric mirrors are used to direct beams into each fibre, and to adjust the power of the beams entering the fibre. Each fibre is then fed into a MEMS line scanner (purple dot-dashed line, here only one shown for clarity), with optics as indicated. The scanner serves to collimate the fibre output, focus it with a cylindrical lens, scan it with a MEMS mirror, and image the scanned output to a field stop at a conjugate sample plane. The beam is then relayed via lens L3 and the objective to the sample plane, after reflection from a dichroic mirror. Fluorescence is collected in epi-mode, transmitted through a dichroic mirror, and imaged via tube lens L4 onto a scientific CMOS camera where it is synchronized to the rolling shutter readout. d) Control waveforms issued to camera, MEMS scanner, objective piezo, and AOTF, used to acquire volumetric data. See also Supplementary Methods. e) Example line illumination at various lateral positions on camera chip, as imaged with fluorescent dye in lower view C. Excitation PSF measurements are taken at various positions in the field (see also Extended Data Fig. 2a, c). Scale bar: 50 μm. f) Example images acquired in C. elegans embryo expressing GFP-histones, as visualized in bottom view C with widefield mode (top) and line-scanning mode (slit width 0.58 μm, bottom). Scale bar: 5 μm.
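The synchronized acquisition in d) can be sketched in outline. The shapes below (a sawtooth MEMS sweep per plane, a piezo staircase between planes, TTL-like camera/AOTF gates) follow the qualitative description in the legend, but every timing value and waveform amplitude here is an illustrative assumption, not the instrument's actual drive signal:

```python
import numpy as np

def acquisition_waveforms(n_planes=5, samples_per_plane=100, readout=10):
    """Illustrative control waveforms for one volume (cf. panel d).
    A sawtooth sweeps the MEMS line scanner once per plane, a staircase
    steps the objective piezo between planes, and TTL-like gates open
    the camera exposure and the AOTF together. All shapes and timings
    are hypothetical placeholders."""
    t = np.arange(n_planes * samples_per_plane)
    within = t % samples_per_plane
    mems = within / samples_per_plane              # one line sweep per plane
    piezo = t // samples_per_plane                 # one axial step per plane
    camera = (within < samples_per_plane - readout).astype(int)  # exposure gate
    aotf = camera.copy()                           # laser on only while exposing
    return mems, piezo, camera, aotf

mems, piezo, camera, aotf = acquisition_waveforms()
print(int(piezo.max()) + 1)  # → 5 planes per volume
```

Feeding traces of this shape to a DAQ card is the standard way such line scanners are coordinated; the real system's waveforms are given in the Supplementary Methods.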

Extended Data Fig. 2 Further characterization of imaging field.

a) 175 × 175 μm2 imaging field, showing representative horizontal illumination lines across field, as measured in fluorescent dye. Also superimposed are example regions of interest (#1–#9) from a separate experiment, showing images of 100 nm fluorescent beads at different field locations. Scale bar: 20 μm. b) Excitation uniformity measured along the long axis of illumination, from lines at top, middle, and bottom of imaging field, as marked in a). c) Full width at half maximum (FWHM) of short axis of illumination line, as measured at regions in a). FWHMs are estimated by scanning illumination relative to bead and recording intensity as a function of scan position. Means and standard deviations derived from 15 beads are reported at each location. d) As in c), but now reporting x, y, and z FWHMs derived from images of individual beads (n = 15).
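The FWHM values in c) and d) come from bead profiles. A common way to extract them, and what the sketch below assumes (the authors' exact fitting procedure may differ), is a 1D Gaussian fit with FWHM = 2√(2 ln 2)·σ:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    """1D Gaussian model for a bead/line intensity profile."""
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

def fwhm_from_profile(positions, intensities):
    """Fit a Gaussian to intensity vs. scan position and return the
    full width at half maximum, FWHM = 2*sqrt(2*ln 2)*sigma."""
    p0 = [intensities.max() - intensities.min(),
          positions[np.argmax(intensities)],
          (positions[-1] - positions[0]) / 10,
          intensities.min()]
    popt, _ = curve_fit(gaussian, positions, intensities, p0=p0)
    return 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])

# Synthetic bead scan: a 0.3 um FWHM line stepped across a point-like bead.
x = np.linspace(-1, 1, 200)  # scan position (um)
true_sigma = 0.3 / (2 * np.sqrt(2 * np.log(2)))
profile = gaussian(x, 1.0, 0.0, true_sigma, 0.05)
print(round(fwhm_from_profile(x, profile), 3))  # → 0.3
```

On real bead data one would fit each of the 15 beads per location and report the mean ± s.d., as in c) and d).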

Extended Data Fig. 3 Resolution enhancement with triple-view confocal imaging.

a) Lateral maximum intensity projection image of fixed U2OS cells immunolabelled with mouse-anti-alpha tubulin, anti-mouse-biotin, streptavidin Alexa Fluor 488, marking microtubules (fused result after registration and deconvolution of all three raw views). Height: colour bar. b) Raw and fused axial views along the dashed line in a). Fourier transforms of axial view are shown in last row, showing resolution enhancement after fusion. Orange oval: 1/260 nm−1 in half width and 1/405 nm−1 in half height. c) Triple-view reconstruction of whole fixed L1 stage larval worm, nuclei labelled with NucSpot 488. Two maximum intensity projections are shown, taken after rotating volume 220 degrees (top) and 300 degrees (bottom) about the x axis. d) Higher magnification lateral view 10 μm from the beginning of the volume, corresponding to the yellow dashed region in c). e) Higher magnification axial views (single planes) corresponding to the red rectangular region in c), comparing single-view deconvolved result (left) to triple-view result (right). Magenta arrows highlight subnuclear structure better resolved in triple-view result. f) Line profiles corresponding to e1, e2 in e). Scale bars: a, c) 10 µm; b, e) 3 µm.
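The fused result in a) comes from registering the three views and jointly deconvolving them (refs. 5–7 describe Richardson–Lucy-based multiview deconvolution). A toy joint Richardson–Lucy sketch, with Gaussian stand-in PSFs in place of the measured ones and pre-registered toy views:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiview_richardson_lucy(views, sigmas, n_iter=20):
    """Joint Richardson-Lucy deconvolution over registered views.
    Each iteration applies one multiplicative RL update per view.
    Gaussian PSFs stand in for measured ones and, being symmetric,
    serve as their own mirrored ("flipped") PSFs."""
    est = np.mean(views, axis=0)
    for _ in range(n_iter):
        for view, sigma in zip(views, sigmas):
            blurred = gaussian_filter(est, sigma)
            ratio = view / np.maximum(blurred, 1e-12)
            est = est * gaussian_filter(ratio, sigma)
    return est

# A point object blurred anisotropically along orthogonal axes mimics
# two views with poor, but complementary, axial resolution.
truth = np.zeros((64, 64))
truth[32, 32] = 1.0
view_a = gaussian_filter(truth, (1.0, 3.0))
view_b = gaussian_filter(truth, (3.0, 1.0))
fused = multiview_richardson_lucy([view_a, view_b],
                                  [(1.0, 3.0), (3.0, 1.0)])
print(fused[32, 32] > view_a.max())  # fused estimate is sharper than either view
```

Because each view constrains the axis the other blurs, the fused PSF support approaches the intersection of the two, which is the origin of the improved resolution isotropy shown in b).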

Extended Data Fig. 4 Contrast and resolution are enhanced after multiview fusion or SIM.

a) Nanoscale imaging of organelles (mitochondrial outer membrane, cyan; and double-stranded DNA, magenta) in expanded U2OS cells, to accompany Fig. 1f. Top: images; bottom: line profiles corresponding to a1–a3 at top. b) Neurons marked with membrane-targeted GFP in living C. elegans embryos, to accompany Fig. 2i. Top: images; bottom: line profiles corresponding to b1–b4 at top, n = 5 profiles at indicated positions were used to generate means ± standard deviations for b1, b2. c) Stained actin in fixed B16F10 mouse melanoma cells embedded in collagen gels, to accompany Fig. 3c. Left: images; right: line profiles corresponding to c1–c4 at left. d) Neurons in fixed, expanded C. elegans embryos, to accompany Fig. 3e, f. Top: images; bottom: line profiles corresponding to d1–d6 at top, n = 7 profiles at indicated positions were used to generate means ± standard deviations for d1–d3. e) Histone H2B puncta in live Jurkat T cells, to accompany Fig. 4c. Top: images; bottom: line profiles corresponding to e1–e3 at top, n = 10 profiles at indicated positions were used to generate means ± standard deviations for e1–e3. Scale bars: a, d) 2 µm, b, c, e) 5 µm.

Extended Data Fig. 5 Triple-view comparisons in adult C. elegans.

a) Axial views of whole fixed worm labelled with NucSpot Live 488, comparing raw views gathered by objectives A, B, C; triple-view reconstruction; and point-scanning confocal microscope. Arrows highlight example region with uniformly good quality in triple-view reconstruction, but showing attenuation either at bottom (Views A, B) or top (View C, point-scanning confocal) of stack with other methods. See also Fig. 2g–j. b) Average attenuation measured through axial extent of the worm, as measured by raw views A, B and C (top graph) and triple-view reconstruction and point-scanning confocal (bottom graph). Exponential fits to the data are also shown with dashed lines. See also Supplementary Methods. c–f) Comparative higher magnification views of dashed red rectangular region in a), with bottom deconvolved view c), commercial Leica SP8 confocal microscope e), conventional triple-view deconvolution d), attenuation-compensated triple-view deconvolution f). Coloured arrows highlight comparisons, orange: single-view versus triple-view, magenta: deconvolution methods. Scale bars: a) 50 μm, c–f) 10 μm.
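The exponential fits in b) amount to fitting a Beer–Lambert-style decay I(z) = I₀·exp(−μz) to the depth-resolved mean signal. A minimal curve-fit sketch on synthetic, noiseless data (the decay constant 0.04 µm⁻¹ and intensities are arbitrary stand-ins):

```python
import numpy as np
from scipy.optimize import curve_fit

def attenuation(z, i0, mu):
    """Exponential decay of mean slice intensity with depth z."""
    return i0 * np.exp(-mu * z)

depth = np.linspace(0.0, 50.0, 51)           # depth (um)
signal = attenuation(depth, 1000.0, 0.04)    # synthetic depth profile

popt, _ = curve_fit(attenuation, depth, signal, p0=[signal[0], 0.01])
i0_fit, mu_fit = popt
print(round(float(mu_fit), 3))  # → 0.04
```

Comparing the fitted μ between raw views and the triple-view reconstruction quantifies how much the fusion flattens the depth-dependent signal loss.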

Extended Data Fig. 6 Triple-view comparisons in scattering tissue.

a) Schematic of kidney: approximate region where tissue was extracted. b) Four colour triple-view reconstruction of mouse kidney slice. Lateral image at 4-µm depth, highlighting glomerulus surrounded by convoluted tubules. Red: nuclei stained with DAPI, green: actin stained with phalloidin-Alexa Fluor 488; magenta: tubulin immunolabelled with mouse-α-Tubulin primary, α-Mouse-JF549 secondary; yellow: CD31 immunolabelled with Goat-α-CD31 primary, α-Goat AF647 secondary. Scale bar: 20 μm. c) Comparative higher magnification of white rectangular region in b) at 28-µm depth. Scale bar: 20 μm. d) Comparative axial view along dashed line in b). White arrows: structures that are dim in single-view but restored in triple-view. Scale bar: 20 μm. e) Triple-view reconstruction of mouse cardiac tissue (~536 × 536 × 37 μm3 section, shown is a maximum intensity projection of axial slices from 10–30 μm). Magenta: Atto 647 NHS, nonspecifically marking proteins, cyan: NucSpot Live 488, marking nuclei. Scale bar: 50 μm. f) Higher-magnification view of orange rectangular region in e), comparing raw View C (top) to triple-view reconstruction (bottom). Scale bar: 20 μm. g) Labels as in e) but showing triple-view reconstruction of mouse brain tissue (~536 × 536 × 25 μm3 section, shown is a maximum intensity projection of axial slices from 5–15 μm). Scale bar: 50 μm. h) Axial maximum intensity projection computed over 120 μm vertical extent of white rectangular region shown in g), comparing raw View C (top) to triple-view (bottom) result. Scale bar: 10 μm. i) Mouse cardiac tissue stained with Atto 647 NHS (same sample as e), nonspecifically marking proteins, as observed in lower view C (left), after triple-view deconvolution (middle), and triple-view deconvolution after flat-fielding (right). Scale bar: 20 μm. j) Intensity profiles from i), created by averaging across vertical axis.

Extended Data Fig. 7 Triple-view comparisons in Drosophila wing imaginal disks.

a) Schematic of larval wing disc, lateral (top) and axial (bottom) views, including adult muscle precursor myoblasts and notum. b) Lateral plane from triple-view reconstruction, 30 μm from sample surface. Notum nuclei (NLS-mCherry, magenta) and myoblast membranes (CD2-GFP, cyan) labelled. c) Axial maximum intensity projection derived from 6-μm-thick yellow rectangle in b). d) Higher magnification view of white dashed line/rectangle in b, c), comparing single view C deconvolution (left) to triple-view result (right). White arrows: membrane observed in triple-view but absent in single-view. e) Line profiles corresponding to d1, d2 in d). f) Lateral (left) and axial (right) maximum intensity projections of the larval wing disc. g) As in f) but showing another larval wing disc imaged with a spinning-disk confocal microscope with NA = 1.3. Scale bars: b, c) 20 μm, d) 5 μm, f, g) 15 μm.

Extended Data Fig. 8 Live triple-view confocal imaging of cardiomyocytes.

a) Cardiomyocytes expressing EGFP-Tomm20, labelled with MitoTracker Red CMXRos, imaged with triple-view confocal microscopy in 2.03 s, every 20 s, for 100 time points. Lateral (top) and axial (bottom) maximum intensity projections shown at indicated times before and after registration/deconvolution. Scale bar: 10 μm. b) Higher-magnification view of orange rectangular region in a), highlighting mitochondrial fluctuations (red arrows) over time. Scale bar: 3 μm. See also Supplementary Video 3.

Extended Data Fig. 9 Neural network schematics.

a) Workflow for the two-step deep-learning procedure used in Fig. 2e–i. Denoising neural networks are trained for views A, B, C using matched high- and low-SNR volumes derived from embryos paralysed with sodium azide. The denoised outputs of the three models are fused with joint deconvolution (triple-view decon), and the fused result is paired with the denoised view C output to train a second neural network (Decon model). In this manner, noisy raw data from View C can be transformed into a high-SNR, high-resolution prediction. b) Example data (lateral slice through GFP-histone-expressing C. elegans embryo), from left to right: raw View C data; single neural-network prediction (raw input View C to high-SNR, triple-view deconvolved result); two-step denoised and deconvolved prediction; high-SNR denoised triple-view deconvolved result (i.e., the ground truth used in training the second neural network in a). The two-step prediction is noticeably closer to the ground truth (red arrows highlight regions for comparison; 3D SSIM 0.89 ± 0.05, PSNR 43.4 ± 2.2 (mean ± standard deviation, n = 7 embryos)) than the single network (3D SSIM 0.72 ± 0.17, PSNR 27.8 ± 6.4, n = 7). Scale bar: 10 μm. See also Fig. 2e. c) Example raw input (left), output of single neural network (middle), and two-step output (right) for AIB neuron. Axial views are shown; orange arrows indicate that fine features are better preserved by the two-step rather than the one-step neural network. Scale bar: 5 μm. See also Fig. 2h, i. d) Training and validation loss (left) and error (right) as a function of epoch number, for the second step in the neural network, i.e., denoised view C input and triple-view deconvolved output in c). MSE: mean square error. MAE: mean absolute error. e) Mask R-CNN used for segmenting nuclear data in Figs. 1j, 2e. Key components of the Mask R-CNN include a backbone network, region proposal network (RPN), object classification module, bounding box regression module, and mask segmentation module.
The backbone network is a convolutional neural network that extracts features from the input image. The RPN scans the feature map to detect possible candidate areas that may contain objects (nuclei). For each bounding box containing an object, the object classification module (containing fully connected (FC) layers) classifies objects into specific object class(es) or a background class. The bounding box regression module refines the location of the box to better contain the object. Finally, the mask segmentation module takes the foreground regions selected by the object classification module and generates segmentation masks. f) Post-processing after Mask R-CNN. Nuclei (here four are shown) are often connected and need to be split. We apply the watershed algorithm to split the nuclei based on a distance transform. g) Number of nuclei segmented by two-step deep learning, raw single-view light-sheet imaging, and single-view light-sheet imaging passed through DenseDeconNet, a neural network designed to improve resolution isotropy. Means and standard deviations shown from 16 embryos. See also Fig. 2f. h) Higher magnification of neurites in Fig. 2i. Neurites straightened using ImageJ; corresponding neurite tip regions are indicated by red arrows in Fig. 2i. White arrowheads: varicosities evident in deep-learning prediction (top) but obscured in the diSPIM data. Yellow dashed lines outline neurite. Scale bar: 2 μm.
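The splitting step in f) — a distance-transform watershed on touching nuclei — can be sketched as follows. The seed-detection parameters (smoothing sigma, neighbourhood size) are assumptions, and SciPy's `watershed_ift` stands in for whichever watershed variant the pipeline actually uses:

```python
import numpy as np
from scipy import ndimage

def split_touching_nuclei(mask):
    """Split a binary mask of touching nuclei: compute the Euclidean
    distance transform, take its (smoothed) local maxima as one seed
    per nucleus, then run a seeded watershed on the inverted distance."""
    distance = ndimage.distance_transform_edt(mask)
    smoothed = ndimage.gaussian_filter(distance, 2)
    maxima = (smoothed == ndimage.maximum_filter(smoothed, size=7)) & mask
    markers, n_seeds = ndimage.label(maxima)
    surface = (distance.max() - distance).astype(np.uint16)
    labels = ndimage.watershed_ift(surface, markers)
    labels[~mask] = 0  # restrict labels to the original foreground
    return labels, n_seeds

# Two overlapping discs standing in for a pair of touching nuclei.
yy, xx = np.mgrid[0:40, 0:60]
mask = ((yy - 20) ** 2 + (xx - 20) ** 2 < 100) | \
       ((yy - 20) ** 2 + (xx - 38) ** 2 < 100)
labels, n_seeds = split_touching_nuclei(mask)
print(labels[20, 20] != labels[20, 38])  # the two centres get distinct labels
```

In 3D the same logic applies with volumetric distance transforms and seeds; the Mask R-CNN foreground masks supply the binary input.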

Extended Data Fig. 10 Triple-view 1D SIM methods.

a) Workflow for a single scanning direction and extension for multiview 1D SIM. Five confocal images are acquired per plane, each with illumination structure shifted 2π/5 in phase relative to the previous position. Averaging produces diffraction-limited images. Detecting each illumination maximum and reassigning the fluorescence signal around it (photon reassignment) improves spatial resolution in the direction of the line scan. Combining image volumes acquired from multiple views further improves volumetric resolution. Images are of immunolabelled microtubules in fixed U2OS cells and corresponding Fourier transforms. Scale bar: 2 μm. b) Simulated fluorescence patterns elicited by (top to bottom) 3, 4, 5, 6 phase illumination. Modulation contrast for each pattern is also indicated. Scale bar: 1 μm. See also Supplementary Methods. c) Raw data as excited with 3, 4, 5, 6 phase illumination. d) Corresponding deconvolved result after photon-reassignment. Scale bars in c, d): 3 μm. e) Fourier transforms corresponding to d). Vertical/horizontal resolution improvement corresponding to red ellipse also indicated. f) Line profiles for 3, 4, 5, 6 phase illumination, corresponding to dotted red line in b). g) As in f) but corresponding to dotted line in c).

Extended Data Fig. 11 Triple-view 1D SIM of C. elegans embryo.

a) Fixed C. elegans embryo (strain DCR6681) with tubulin immunolabelled with α-alpha tubulin primary, α-mouse-biotin, streptavidin Alexa Fluor 568, imaged via lower View C (left) and triple-view 1D SIM mode (right). Lateral (left) and axial (right) maximum intensity projections are shown in each case. Scale bars: 5 μm. Higher-magnification lateral b) and axial c) views of the yellow and red rectangular regions in a) are also shown, highlighting the progressive improvement in resolution from view C (left), to triple-view diffraction-limited result (middle), to triple-view 1D SIM result (right). Scale bars: 2 μm.

Extended Data Fig. 12 Triple-view 1D SIM of cells.

a, b) Comparative triple-view SIM a) and instant SIM b) maximum intensity projections of the same fixed HEY-T30 cell embedded in collagen gel, labelled with MitoTracker Red CMXRos and Alexa Fluor 488 phalloidin. White and magenta arrows: the same features in lateral (left) and axial (right) projections. Insets are higher magnification axial views of dashed rectangular regions. c) Comparative raw single view C (left of dashed orange line) and triple-view SIM (right of dashed line) maximum intensity projections of fixed and 4× expanded U2OS cell, immunolabelled with rabbit anti-Tomm20 primary, anti-rabbit-biotin, streptavidin Alexa Fluor 488 marking mitochondria (cyan) and mouse anti-alpha tubulin primary, anti-mouse JF 549, marking tubulin. Lateral (top) and axial (bottom) views are shown. d) Higher magnification views of white dashed region in c), comparing triple-view (left) and triple-view SIM (right) reconstructions. Bottom: sector-based decorrelation analysis to estimate spatial resolution in images at top. Resolution values for horizontal and vertical directions are indicated. Scale bars: a–c) 5 μm, d) 1 μm.

Extended Data Fig. 13 Expansion workflow and imaging results for C. elegans embryos, related to Fig. 3d.

a) Immobilization, permeabilization, fixation, immunostaining and expansion take approximately four days. b) Overlay of an embryo stained with DAPI before and after expansion, after 12-degree-of-freedom affine registration (cyan: pre-expansion; magenta: post-expansion). The embryo was imaged with diSPIM. Single-view maximum intensity projections along xy and xz are shown for comparison. Registration results from 7 embryos (normalized cross-correlation: 0.72 ± 0.03) show that the expansion factor for whole embryos is 3.29 ± 0.14, with nearly isotropic expansion (3.24 ± 0.14, 3.39 ± 0.14 and 3.26 ± 0.17 for the x, y and z dimensions, respectively). Scale bar: 5 µm in pre-expansion units. c) Magnified view of the rectangle in b), showing no obvious local distortions in nuclear shape during expansion. Scale bar: 2 µm in pre-expansion units.
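The per-axis expansion factors quoted above can be read directly off a fitted affine registration matrix: the column norms of its linear part give the x, y and z scale factors, and the cube root of the determinant's magnitude gives the overall factor. A minimal sketch, assuming a standard 3×4 or 4×4 affine matrix from any registration tool:

```python
import numpy as np

def expansion_factors(affine):
    """Per-axis scale factors of an affine registration: norms of the
    linear part's column vectors; overall factor: |det|**(1/3)."""
    A = np.asarray(affine, dtype=float)[:3, :3]  # linear part only
    per_axis = np.linalg.norm(A, axis=0)         # x, y, z scales
    overall = np.abs(np.linalg.det(A)) ** (1 / 3)
    return per_axis, overall
```

For a pure scaling of 3.24, 3.39 and 3.26 along x, y and z, this returns those values per axis and an overall factor near the reported ~3.29.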

Extended Data Fig. 14 Simulation of the deep-learning method for isotropic in-plane super-resolution imaging.

a) An object (left column) consisting of lines and hollow spheres can be blurred to resemble diffraction-limited confocal input (middle column). Fourier transform of the raw input is shown (right column). b) Deep-learning 1D super-resolution output with no rotation (left column); deep learning output after rotating input by 30 degrees (middle column); deep learning output after rotating input by 60 degrees (right column). Fourier transforms in bottom row confirm 1D resolution enhancement regardless of rotation angle. c) Deep learning output after rotating input by 90 degrees (left column); outputs after deep learning and joint deconvolution of two orientations (middle column), or six orientations (right column) show progressive improvement in resolution isotropy (red arrows), confirmed with Fourier transforms of the images (bottom row). Ellipses bound decorrelation estimates of resolution (numerical values from ellipse boundary indicated in red text). Scale bar: 2 μm.
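The rotate–enhance–rotate-back scheme simulated here can be sketched as follows. `enhance_1d` stands in for the trained 1D super-resolution network, averaging stands in for joint deconvolution, and `np.rot90` restricts the sketch to quarter-turns to keep it dependency-free (the simulations above use interpolated rotations at finer angles such as 30 and 60 degrees):

```python
import numpy as np

def isotropize(image, enhance_1d, quarter_turns=(0, 1)):
    """Apply a 1D (row-direction) enhancement operator at several
    orientations, rotate each result back to the original frame, and
    combine, so the 1D gain is distributed over multiple directions."""
    outs = [np.rot90(enhance_1d(np.rot90(image, k)), -k)
            for k in quarter_turns]
    return np.mean(outs, axis=0)
```

With the identity as `enhance_1d` the input is returned unchanged; with a genuine 1D operator, each added orientation sharpens a new direction, improving resolution isotropy.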

Extended Data Fig. 15 Isotropic in-plane super-resolution imaging of fixed cells.

a) Immunolabelled microtubules in fixed U2OS cells, from the same sample shown in Fig. 4b. Shown are the raw input (view C); 1D SIM output after physics-based reconstruction with five images; deep-learning 1D super-resolution output at 0 degrees rotation; and deep-learning 1D super-resolution output after rotation by 120 degrees and rotation back to the original frame. Scale bar: 10 μm. Accompanying Fourier transforms are shown in the bottom row. b) Higher-magnification view of the red dashed region in a), comparing the raw confocal input, 1D SIM output after deep learning and deconvolution, and 2D SIM output after deep learning and joint deconvolution of six rotations. Arrows highlight regions for comparison. c) Profiles along the red line in b), comparing the resolution of two filaments in 2D SIM mode (red), 1D SIM mode (blue) and the confocal input (black).

Extended Data Fig. 16 Isotropic in-plane super-resolution imaging of living cells.

a) Lateral maximum intensity projections of a Jurkat T cell expressing EMTB-3XGFP (yellow) and F-tractin-tdTomato (red), volumetrically imaged every 2 s, as imaged with raw line confocal input (view C, left) and 2D SIM output after deep learning and joint deconvolution with six rotations. Four time points from a 150-volume series are shown; green arrows highlight features better resolved in the 2D SIM output than in the raw input; white arrows indicate actin dynamics at the cell periphery. Scale bar: 5 μm. See also Supplementary Video 6. b) Imaris 3D renderings of a Jurkat T cell (same sample as shown in Fig. 4c) with nucleus (histone H2B-GFP, cyan, top) and segmented centrosome (EMTB-mCherry, magenta, bottom) at the indicated time points. The centrosome produces a concave nuclear deformation (red arrows), pulling and rotating the nucleus as it becomes docked at the immune synapse. Scale bar: 5 μm. c) Maximum intensity projections over the bottom half of EMTB-mCherry volumes at the indicated times, showing coordinated movement of the microtubule cytoskeleton. Coloured arrowheads mark regions for comparison. See also Supplementary Video 6. Scale bar: 5 μm.

Extended Data Fig. 17 Multi-modality imaging enabled with the multiview line confocal system.

a) Different methods of combining data, enabling a highly versatile imaging platform. Left: diffraction-limited volumes acquired from views A (yellow), B (green) and C (red) may be combined with joint deconvolution to yield triple-view diffraction-limited data (red, yellow and green arrows). Middle: alternatively, five volumes per view may be collected and processed as in Extended Data Fig. 10 for 1D SIM, and the three 1D SIM volumes combined using joint deconvolution to reconstruct triple-view 1D SIM data. Right: instead, confocal data from each view may be passed through 1D SIM networks and the outputs combined via joint deconvolution. Combining six rotations (for clarity, only two are shown) from view C yields view C 2D SIM data. If the procedure is repeated for views A and B, and the data combined with joint deconvolution, triple-view 2D SIM data may be obtained. xz cross-sections through PSFs are shown in black boxes. Blue volumes indicate the relative sizes of the PSFs resulting from each process. Scale bars: white, 500 nm; black, 200 nm. b) Applications include wide-field microscopy, single-view line confocal microscopy (from any of the views), single-view 1D SIM, triple-view diffraction-limited imaging and triple-view 1D SIM. With deep learning (red), triple-view line confocal volumes can be predicted from low-SNR single-view input, 1D SIM can be predicted from diffraction-limited input, and combination with joint deconvolution allows further extension to single- and triple-view 2D SIM. Biological and imaging performance examples (resolution, imaging speed and duration) are also provided. Resolution values for line confocal microscopy, triple-view line confocal imaging (without deep learning) and triple-view 1D SIM (without deep learning) are estimated from immunostained microtubules in fixed U2OS cells. Deep-learning resolution values are estimated from fine C. elegans embryo neurites (triple-view confocal) or actin fibres (single-view 2D SIM, triple-view 2D SIM). See also Supplementary Table 1.
c) Decorrelation resolution analysis from images of an L4 larval worm (strain DCR8528) expressing membrane-targeted GFP primarily in the nervous system. See also Extended Data Fig. 18d and Fig. 4i–m. Data (mean ± standard deviation) are derived from 45 measurements (3 animals, 15 planes per animal). Given the ~2.3-fold improvement laterally and ~2.6-fold improvement axially, the triple-view 2D SIM result (253 × 253 × 322 nm³) offers a volume resolution improvement of ~13.8-fold over the raw view C data (601 × 566 × 836 nm³). d) Apparent widths (open circles, mean ± standard deviation) of 8 actin fibres from the cell presented in Fig. 4n–q, comparing lateral (left) and axial (right) full width at half maximum in different microscope modalities. Given the ~2.2-fold improvement laterally and ~2.4-fold improvement axially, the triple-view 2D SIM result offers a volume resolution improvement of ~11.6-fold over the raw view C data.
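The joint deconvolution that combines registered views A, B and C in the workflows above can be sketched as a multiview Richardson–Lucy update, in which one shared estimate is refined in turn against each view with that view's PSF. This is a minimal FFT-based illustration under simplified assumptions (same-sized, centred PSFs; no regularisation), not the published implementation:

```python
import numpy as np

def joint_rl(views, psfs, n_iter=20, eps=1e-6):
    """Joint Richardson-Lucy deconvolution: a single shared estimate is
    updated sequentially against each registered view using that view's
    PSF. `views` and `psfs` are lists of equally shaped arrays, with each
    PSF centred in its array."""
    est = np.mean(views, axis=0)                 # initial guess
    otfs = [np.fft.rfftn(np.fft.ifftshift(p)) for p in psfs]
    conv = lambda im, otf: np.fft.irfftn(np.fft.rfftn(im) * otf, s=im.shape)
    for _ in range(n_iter):
        for v, otf in zip(views, otfs):
            blurred = conv(est, otf)             # forward model for this view
            # Multiplicative RL update; conj(otf) applies the flipped PSF
            est = est * conv(v / (blurred + eps), np.conj(otf))
    return est
```

With an ideal (delta-function) PSF the update leaves the data essentially unchanged, which is a useful sanity check; with real anisotropic PSFs, each view constrains the directions it resolves best, shrinking the effective PSF of the combined reconstruction.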

Extended Data Fig. 18 Multiview super-resolution imaging of larval worm.

a) Maximum intensity projection of a fixed L2-stage larval worm expressing membrane-targeted GFP primarily in the nervous system, imaged in triple-view 2D SIM mode. Anatomy is highlighted as indicated. b) Higher-magnification views (single slices 6 μm into the volume) of the dashed green rectangle in a), highlighting VNC neurons as viewed in diffraction-limited view C (upper), triple-view 1D SIM obtained by processing 15 volumes (5 per view; middle) and triple-view 2D SIM mode (3 volumes, 1 per view; lower). c) Line profiles corresponding to b1–b3 in b). d) Lateral (upper) and axial (bottom) maximum intensity projections from an anaesthetized L4-stage larval worm expressing the same marker, comparing the dense nerve ring region imaged in diffraction-limited view C (left), view C 2D SIM mode (middle) and iSIM (right). Purple and red arrows highlight labelled cell bodies or membranous protrusions for comparison. See also Fig. 4i–m. e) Fixed C. elegans L2 larva (strain DCR6681) expressing a GFP membrane marker, imaged on a commercial OMX 3D SIM system. A single slice ~2 μm from the bottom surface of the worm is shown, derived from a 5 μm stack. Raw data (top) and reconstruction (bottom) are shown. No modulation is evident in the raw data, and the reconstruction shows obvious artefacts (red arrows). Scale bars: a, d) 10 μm; b, e) 5 μm.

Extended Data Table 1 Characterization of imaging field

Supplementary information

Supplementary Information

The Supplementary Information includes (1) legends for Supplementary Videos 1–8 (page 3); (2) Supplementary Methods (pages 4–23); and (3) Supplementary Table (pages 24–27).

Reporting Summary

Peer Review File

Supplementary Video 1

See Supplementary Information for description.

Supplementary Video 2

See Supplementary Information for description.

Supplementary Video 3

See Supplementary Information for description.

Supplementary Video 4

See Supplementary Information for description.

Supplementary Video 5

See Supplementary Information for description.

Supplementary Video 6

See Supplementary Information for description.

Supplementary Video 7

See Supplementary Information for description.

Supplementary Video 8

See Supplementary Information for description.


Cite this article

Wu, Y., Han, X., Su, Y. et al. Multiview confocal super-resolution microscopy. Nature 600, 279–284 (2021). https://doi.org/10.1038/s41586-021-04110-0

