
  • Brief Communication

Analyzing complex single-molecule emission patterns with deep learning

Abstract

A fluorescent emitter simultaneously transmits its identity, location, and cellular context through its emission pattern. We developed smNet, a deep neural network for multiplexed single-molecule analysis to retrieve such information with high accuracy. We demonstrate that smNet can extract three-dimensional molecule location, orientation, and wavefront distortion with precision approaching the theoretical limit, and therefore will allow multiplexed measurements through the emission pattern of a single molecule.


Fig. 1: The concept of training and inference with smNet and a description of the basic architecture.
Fig. 2: 3D SMSN reconstruction using smNet, and comparison of its error radii with the CRLB for aberrated astigmatism and double-helix PSFs.
Fig. 3: smNet measures wavefront distortions.


Data availability

The data that support the findings of this study are available from the corresponding author upon request. Example data are available in the Supplementary Data and Supplementary Software packages.

References

  1. Moerner, W. E. & Fromm, D. P. Rev. Sci. Instrum. 74, 3597–3619 (2003).


  2. von Diezmann, A., Shechtman, Y. & Moerner, W. E. Chem. Rev. 117, 7244–7275 (2017).


  3. Backlund, M. P., Lew, M. D., Backer, A. S., Sahl, S. J. & Moerner, W. E. ChemPhysChem 15, 587–599 (2014).


  4. Moon, S. et al. J. Am. Chem. Soc. 139, 10944–10947 (2017).


  5. Burke, D., Patton, B., Huang, F., Bewersdorf, J. & Booth, M. J. Optica 2, 177–185 (2015).


  6. Andrews, N. L. et al. Nat. Cell Biol. 10, 955–963 (2008).


  7. Femino, A. M., Fay, F. S., Fogarty, K. & Singer, R. H. Science 280, 585–590 (1998).


  8. Ha, T. et al. Proc. Natl. Acad. Sci. USA 93, 6264–6268 (1996).


  9. Baddeley, D. & Bewersdorf, J. Annu. Rev. Biochem. 87, 965–989 (2018).


  10. Sage, D. et al. Nat. Methods 12, 717–724 (2015).


  11. Pavani, S. R. et al. Proc. Natl. Acad. Sci. USA 106, 2995–2999 (2009).


  12. Babcock, H. P. & Zhuang, X. Sci. Rep. 7, 552 (2017).


  13. Li, Y. et al. Nat. Methods 15, 367–369 (2018).


  14. LeCun, Y., Bengio, Y. & Hinton, G. Nature 521, 436–444 (2015).


  15. Bowen, B. P., Scruggs, A., Enderlein, J., Sauer, M. & Woodbury, N. J. Phys. Chem. A 108, 4799–4804 (2004).


  16. Zhang, Y. et al. Protein Cell 4, 598–606 (2013).


  17. He, K., Zhang, X., Ren, S. & Sun, J. in Proc. 29th IEEE Conference on Computer Vision and Pattern Recognition: CVPR 2016 (eds Agapito, L. et al.) 770–778 (IEEE, Piscataway, NJ, 2016).

  18. Liu, S., Kromann, E. B., Krueger, W. D., Bewersdorf, J. & Lidke, K. A. Opt. Express 21, 29462–29487 (2013).


  19. Cutler, P. J. et al. PLoS One 8, e64320 (2013).


  20. Ji, N. Nat. Methods 14, 374–380 (2017).


  21. Lecun, Y., Bottou, L., Bengio, Y. & Haffner, P. Proc. IEEE Inst. Electr. Electron. Eng. 86, 2278–2324 (1998).


  22. Nielsen, M. A. Neural Networks and Deep Learning. (Determination Press, San Francisco, 2015).


  23. Ioffe, S. & Szegedy, C. in Proc. 32nd International Conference on Machine Learning Vol 37 (eds Bach, F. & Blei, D.) 448–456 (JMLR/Microtome Publishing, Cambridge, MA, 2015).


  24. He, K., Zhang, X., Ren, S. & Sun, J. in Proc. 2015 IEEE International Conference on Computer Vision (ICCV) (eds Bajcsy, R. et al.) 1026–1034 (IEEE, Piscataway, NJ, 2015).

  25. Huang, F. et al. Cell 166, 1028–1040 (2016).


  26. Wang, B. & Booth, M. J. Opt. Commun. 282, 4467–4474 (2009).


  27. Wyant, J. C. & Creath, K. in Applied Optics and Optical Engineering Vol XI (eds Shannon, R. R. & Wyant, J. C.) 1–53 (Academic, New York, 1992).


  28. Smith, C. S., Joseph, N., Rieger, B. & Lidke, K. A. Nat. Methods 7, 373–375 (2010).


  29. Hanser, B. M., Gustafsson, M. G. L., Agard, D. A. & Sedat, J. W. J. Microsc. 216, 32–48 (2004).


  30. Piestun, R., Schechner, Y. Y. & Shamir, J. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 17, 294–303 (2000).


  31. Ober, R. J., Ram, S. & Ward, E. S. Biophys. J. 86, 1185–1200 (2004).


  32. Liu, S. & Lidke, K. A. ChemPhysChem 15, 696–704 (2014).



Acknowledgements

We thank A.J. Schaber for help with initial data acquisition and the Bindley Bioscience Center at Purdue, a core facility of the NIH-funded Indiana Clinical and Translational Sciences Institute. We thank K.A. Lidke and M.J. Wester from the University of New Mexico for an initial contribution to the PSF toolbox software. We thank E.B. Kromann (Technical University of Denmark) for sharing the phase unwrapping code. We thank C. Haig (Hamamatsu Photonics K.K.) and K.F. Ziegler for their help on the project, and P.-M. Ivey, Y. Li, and F. Xu for suggestions on the manuscript. P.Z., S.L., M.J.M., D.M., and F.H. were supported by the NIH (grant R35 GM119785) and DARPA (grant D16AP00093).

Author information

Authors and Affiliations

Authors

Contributions

P.Z., S.L., and A.C. developed the algorithm. P.Z. and S.L. wrote the software, performed the experiments, and analyzed the data. P.Z., S.L., and D.M. prepared the specimens. M.J.M. and D.M. constructed the imaging systems. P.Z. and F.H. conceived the study. E.C. and F.H. supervised the study. All authors wrote the manuscript.

Corresponding author

Correspondence to Fang Huang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Integrated supplementary information

Supplementary Figure 1 Basic deep neural architecture of smNet and the achieved precision and accuracy in single-molecule localization.

(a) Basic deep neural architecture of smNet. The example input is a 3D image of C × N × N pixels (image size N = 16 or 32; number of channels C = 1 or 2 for single-plane or biplane PSFs). Through a chain of neural network layers, represented by colored rectangular blocks whose depths scale with the number of feature maps (channels), smNet generates the output estimation. Each layer is connected by a 3D convolution kernel. Colors represent different building blocks of the architecture, with details shown in the corresponding boxes. Two types of residual blocks (orange and purple) were implemented, with the number of feature maps unchanged (p = q) and changed (p ≠ q), respectively. Supplementary Table 1 summarizes the detailed sizes of the smNet architecture and its variations. Detailed considerations and architecture modifications can be found in Supplementary Notes 2, 8 and 9. (b) Precision and accuracy of the localization estimations with smNet based on PSFs with experimentally measured aberrations. The aberrated PSF images were generated from an experimentally retrieved pupil function (including aberrations at 12 µm above the coverslip surface; see Supplementary Note 5 for details). The parameters used for PSF generation were NA = 1.35, λ = 690 nm, pixel size of 113 nm, subregion size of 16 × 16 pixels, and the refractive index of the immersion medium equal to 1.406. The centers and error bars are mean and s.t.d. of 1000 biases respectively, and each bias is calculated from 1000 PSFs. (c) Precision and accuracy of the localization estimations with smNet on simulated double-helix PSFs (Supplementary Note 5). The observed precision of smNet closely approaches the √CRLB, with an average absolute localization bias of 1 ± 0.84 nm (s.t.d.) in x, y, and z within the axial range of −1000 nm to 1000 nm. A total of 2,100,000 images were analyzed for each plot (Supplementary Table 3). The parameters used for PSF generation were NA = 1.49, λ = 690 nm, pixel size of 100 nm, subregion size of 32 × 32 pixels, and the refractive index of the immersion medium equal to 1.52. The centers and error bars are mean and s.t.d. of 100 biases respectively, and each bias is calculated from 1000 PSFs.
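For readers who want a concrete picture of the two residual-block variants mentioned in the caption, here is a minimal PyTorch sketch. It is illustrative only: the layer widths, activation choices, and use of 2D (rather than 3D) convolutions are assumptions for brevity, not the published smNet configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of a residual block in the style described above.

    When the number of feature maps changes (p != q), a 1x1 convolution
    projects the identity path so the addition is shape-compatible;
    otherwise the input is added back unchanged.
    """
    def __init__(self, p, q):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(p, q, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(q),
            nn.PReLU(),
            nn.Conv2d(q, q, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(q),
        )
        # Projection shortcut only when the channel count changes.
        self.shortcut = (nn.Identity() if p == q
                         else nn.Conv2d(p, q, kernel_size=1, bias=False))
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.shortcut(x))

x = torch.randn(4, 1, 16, 16)      # a batch of 16 x 16 single-channel PSFs
same = ResidualBlock(1, 1)(x)      # p == q: identity shortcut
proj = ResidualBlock(1, 32)(x)     # p != q: projected shortcut
```

Stacking such blocks, with the channel count growing through the p ≠ q variant, yields the kind of feature hierarchy the caption describes.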

Supplementary Figure 2 Diagram of the smNet training algorithm.

All steps in training and validation are described in Supplementary Note 3.

Supplementary Figure 3 Visualization of smNet kernels and outputs at each layer.

(a) Visualization of the kernels of the first two convolutional layers of smNet trained with the PSFs shown in Supplementary Fig. 7b. (b) Visualization of outputs of intermediate steps from smNet when estimating the axial position for a single PSF. 20 experiments were repeated independently with similar results.

Supplementary Figure 4 The bias and precision achieved by smNet with different data sizes (75% for training and 25% for validation) for astigmatism and double-helix PSFs.

(a) smNet localization bias and precision with different data sizes (75% for training and 25% for validation) at axial positions from −500 nm to 500 nm for the astigmatism PSF (modeled with measured aberration at 1 µm from the bottom coverslip surface). A phase-retrieved pupil function from the experimentally recorded PSFs, obtained by imaging a bead at the cover glass, was used for data simulation, and an index-mismatch aberration phase at a stage position of 1 µm was added to the pupil function (Supplementary Notes 5 and 10). The centers and error bars are mean and s.t.d. of 1000 biases respectively, and each bias is calculated from 1000 PSFs. (b) smNet localization bias and precision with different data sizes (75% for training and 25% for validation) at axial positions from −1000 nm to 1000 nm for the simulated double-helix PSF. The centers and error bars are mean and s.t.d. of 100 biases respectively, and each bias is calculated from 1000 PSFs. For testing, 1000 (or 100 in b) PSF images were generated with random x, y positions at each axial position from −500 nm to 500 nm (or −1000 nm to 1000 nm in b) with a step size of 100 nm. Each of those PSFs was replicated 1000 times. The total photon count of each PSF was 2000 and the background photon count of each pixel was 10.

Supplementary Figure 5 Comparison of localization results for experimentally obtained complex PSFs using the phase retrieval (PR) method, cubic spline, and smNet.

(a) Step plots of z localizations using smNet (blue cross), the phase retrieval method (green cross) and the spline fit method (orange cross). The complex PSFs were acquired by moving the sample stage along the z-axis from −1000 nm to 1000 nm, with a step size of 100 nm and 20 frames per z position. An initial guess based on cross-correlation with a stack of pre-generated PSF images was used for the PR method. Localization with the PR method was performed on the CPU instead of the GPU (ref. 18; Supplementary Note 11): because the complexity of the pupil function requires all 64 Zernike polynomials, using the Matlab FFT function on the CPU is faster than calculating the integrals of all Zernike polynomials on the GPU. The localization speeds of the PR method used here and the spline fit method are 0.9 s and 0.01 s per sub-region, respectively. (b) Comparison of localization deviations and precision from smNet, the PR method and the spline fit method. The deviations in x and y were calculated from a reference (x, y) position, which is the median of all localizations. We used the storm-analysis software package (https://github.com/ZhuangLab/storm-analysis; ref. 12) for the spline fit. The parameter values used for the spline fit on recorded complex PSFs were as follows: aoi_size = 32, pixel_size = 0.13, z_range = 1.5, z_step = 0.05 and s_size = 30 for generating the measured PSF (.psf) and spline coefficients (.spline); and camera_gain = 1, camera_offset = 0, pixel_size = 0.13, z_value = [−1, −0.5, 0, 0.5, 1], min_z = −1.4, max_z = 1.4, find_max_radius = 20, sigma = 1 and threshold = 3 for generating spline fit results (.hdf5). Three experiments (420 PSFs localized in each experiment) were repeated independently with similar results.

Supplementary Figure 6 Simulated complex PSFs at various axial positions.

The simulated complex PSFs were generated from the Fourier transform of a designed pupil function (Supplementary Note 5). The pupil function was constructed with Zernike polynomials 5, 16 and 19 (Wyant ordering, ref. 26), with amplitudes of 0.48, 0.64 and 0.48 (unit: λ), respectively. The parameters used for PSF generation were NA = 1.4, λ = 690 nm, pixel size of 130 nm, subregion size of 32 × 32 pixels, and the refractive index of the immersion medium equal to 1.52.
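The pupil-to-PSF simulation used throughout these figures follows a standard scalar-diffraction recipe: build a pupil function with a Zernike phase, Fourier transform it, and take the squared magnitude. The following numpy sketch shows the idea; it includes only a single astigmatism mode and illustrative parameter values, not the exact settings or Zernike set used in the figure.

```python
import numpy as np

def simulated_psf(astig_amp=0.5, n=32, na=1.4, wavelength=0.69, pixel=0.13):
    """Toy scalar PSF from a pupil function.

    The PSF is |FFT(pupil)|^2, with the pupil defined on a circular
    aperture of radius NA/lambda in frequency space. Only vertical
    astigmatism (rho^2 * cos(2*theta)) is included here; amplitudes
    are in waves. Lengths are in microns.
    """
    # Pupil-plane coordinates, normalized to the cutoff frequency NA/lambda.
    f = np.fft.fftfreq(n, d=pixel)                 # cycles per micron
    fx, fy = np.meshgrid(f, f)
    rho = np.sqrt(fx**2 + fy**2) / (na / wavelength)
    theta = np.arctan2(fy, fx)
    aperture = (rho <= 1.0).astype(float)
    # Zernike astigmatism phase; further modes would be added the same way.
    phase = 2 * np.pi * astig_amp * rho**2 * np.cos(2 * theta)
    pupil = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()                         # unit total intensity

psf = simulated_psf()
```

Scaling the normalized PSF by a total photon count and adding a background then gives the kind of noisy sub-regions used for training and testing.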

Supplementary Figure 7 Trained smNet measures complex PSFs.

(a) Axial position estimation of simulated complex PSFs (1000 PSFs per step) in comparison with the ground truth. Sub-plots (i–iii) are enlarged images from the orange boxes in the main figure. (b) Axial position estimation of experimentally obtained complex PSFs. The red solid line represents the average of axial localizations at each step. Sub-plots (i–iii) are enlarged images from the orange boxes in the main figure. Three experiments (420 PSFs localized per experiment) were repeated independently with similar results. (c) Quantification of localization precision and bias achieved from the simulated complex PSFs in a. (d) Localization precision achieved and estimated step sizes (from averaged z localizations) from the recorded complex PSFs shown in b. See Supplementary Figs. 5 and 6 and Supplementary Table 3 for parameter settings for c and d.

Supplementary Figure 8 Error-radius ratios for various photon and background conditions.

(a) Error-radius ratios between smNet and the CRLB (Supplementary Note 7) for various low-signal-to-noise-ratio (SNR) photon and background levels. The area bounded by the solid magenta boundary contains photon–background pairs for which information is insufficient, and thus smNet slightly diverged from the CRLB. We also observe that the error-radius ratios are smaller than 1 in the solid red boundary area; this is because the outputs of smNet are bounded by HardTanh (Supplementary Notes 8 and 9), whereas the CRLB-predicted precision values in these extremely low SNR conditions are extremely large. (b) Visualization of the PSF images at the three dashed-line boxes in a. 11000 PSFs of the same signal-to-noise ratio were analyzed for each photon count and background count in a. (c) Error-radius ratios from PSFs with 500 background counts when smNet was trained with different background ranges. The heatmap on the left shows the error-radius ratios from smNet trained with a background range of 1 to 30; most values are larger than one. However, if smNet is trained with a background range of 1 to 600 (heatmap on the right), the resulting error-radius ratios are close to one. The error-radius ratios smaller than one are caused by the HardTanh cut-off (Supplementary Note 9). For each total-photon/background pair, 1000 astigmatism PSFs with random x, y positions (in the range of −1.5 to 1.5 pixels from the center) were simulated at z positions from −500 nm to 500 nm, with a step size of 100 nm (Supplementary Table 3). The training conditions can be found in Supplementary Tables 2 and 4. A phase-retrieved pupil function from the experimentally recorded PSFs, obtained by imaging a bead at the cover glass, was used for data simulation, and an index-mismatch aberration phase at a stage position of 1 µm was added to the pupil function (Supplementary Notes 5 and 10).
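The error-radius ratio above compares an observed localization error against the CRLB-predicted bound. The sketch below uses one plausible definition (3D root-mean-square deviation versus the root of the summed per-axis CRLB variances); the paper's exact definition is the one in its Supplementary Note 7, and the names here are illustrative.

```python
import numpy as np

def error_radius_ratio(est, truth, crlb_var):
    """Hypothetical error-radius ratio.

    Assumes the error radius is the 3D RMS deviation of the estimates
    and the CRLB radius is sqrt of the summed per-axis CRLB variances.
    est, truth: (n, 3) arrays of (x, y, z); crlb_var: three variances.
    A ratio near 1 means the estimator performs at the bound.
    """
    est = np.asarray(est, float)
    truth = np.asarray(truth, float)
    obs = np.sqrt(np.mean(np.sum((est - truth) ** 2, axis=1)))
    bound = np.sqrt(np.sum(crlb_var))
    return obs / bound

# An estimator whose 3D deviation exactly matches the bound gives ratio 1.
truth = np.zeros((1000, 3))
est = np.tile([3.0, 0.0, 4.0], (1000, 1))     # constant 5-unit deviation
ratio = error_radius_ratio(est, truth, crlb_var=[9.0, 0.0, 16.0])
```

This also makes the caption's point concrete: an output clamp (such as HardTanh) can shrink `obs` below `bound` at very low SNR, pushing the ratio under one without the estimator being genuinely better than the CRLB.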

Supplementary Figure 9 smNet performance on different levels of nonuniform background for astigmatism PSFs with measured aberrations at 1 µm from the bottom coverslip surface.

(a) Example PSF images with non-uniform background generated from Perlin noise at various average background values and spatial inhomogeneity levels (indicated by the standard deviation across all pixel values in each sub-region), as well as images with uniform background. 11000 PSFs of each kind were analyzed in b. (b) Localization bias and precision of PSF data with various inhomogeneity levels of non-uniform background. For non-uniform background with a low inhomogeneity level (Perlin noise s.t.d = 1, magenta), the localization biases are close to zero and the localization precisions approach the √CRLB (green), while for non-uniform background with high inhomogeneity levels (Perlin noise s.t.d = 2 to 14, blue, corresponding to the background levels shown in a), the localization errors increase significantly. For each background level, PSF data were generated at z positions from −500 to 500 nm, with 100 nm intervals, and 1000 sub-regions per z position. A phase-retrieved pupil function from the experimentally recorded PSFs, obtained by imaging a bead at the cover glass, was used for data simulation, and an index-mismatch aberration phase at a stage position of 1 µm was added to the pupil function (Supplementary Notes 5 and 10). The centers and error bars are mean and s.t.d. respectively (n = 1000).
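Non-uniform backgrounds of the kind shown here are low-spatial-frequency fields characterized by a mean level and a spatial standard deviation. The sketch below generates such a field by bilinearly upsampling a coarse random grid; it is a stand-in for the Perlin noise the paper actually uses, with only the mean/s.t.d. knobs matching the figure.

```python
import numpy as np

def smooth_background(n=16, mean_bg=10.0, target_std=4.0, cell=4, seed=0):
    """Low-frequency background field on an n x n sub-region.

    A coarse (cell+1) x (cell+1) random grid is bilinearly interpolated
    onto the fine grid, then rescaled to the requested spatial mean and
    standard deviation. Stand-in for Perlin noise, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    coarse = rng.standard_normal((cell + 1, cell + 1))
    # Fine-grid coordinates in coarse-cell units, split into index + fraction.
    xs = np.linspace(0.0, cell, n)
    i = np.clip(xs.astype(int), 0, cell - 1)
    t = xs - i
    # Bilinear interpolation: first along rows, then along columns.
    rows = (1 - t)[:, None] * coarse[i] + t[:, None] * coarse[i + 1]
    field = (1 - t)[None, :] * rows[:, i] + t[None, :] * rows[:, i + 1]
    # Rescale to the chosen mean background and spatial inhomogeneity.
    field = (field - field.mean()) / field.std() * target_std + mean_bg
    return field

bg = smooth_background()
```

Adding Poisson noise on top of `psf_photons + bg` per pixel would then reproduce the kind of test data described in panel b.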

Supplementary Figure 10 Error radii for different levels of index mismatches and coma aberrations for smNet trained with astigmatism PSFs (modeled with measured aberration at 12 µm from the bottom coverslip surface).

(a) Error radii (Supplementary Note 7) versus index-mismatch aberration at different stage positions (z planes). For each z plane, the test data were composed of 1000 PSFs with random x, y positions at each axial position in a range of −500 nm to 500 nm (relative to a specific z plane, Supplementary Note 5) with a step size of 100 nm (11000 images per z plane). (b) Error radii versus different amounts of coma aberration (Zernike polynomial 6, Wyant ordering, ref. 26). For each coma aberration, the test data were composed of 1000 PSFs with random x, y positions at each axial position in a range of −500 nm to 500 nm (relative to the z plane at 12 µm) with a step size of 100 nm (11000 images per z plane). The total photon count of each emitter was 2000 and the background photon count of each pixel was 10.

Supplementary Figure 11 3D SMSN reconstruction of TOM20 protein in COS-7 cells at 1 μm above the bottom coverslip surface.

(a) x-y projection of the 3D reconstructed image, color represents the relative z position (Supplementary Note 16). The image is representative of 4 independently repeated experiments. (b) Axial cross sections of the selected regions (red boxes in A) from images generated by smNet and Gaussian-based localization algorithm (Supplementary Note 12). The data are representative of 4 independently repeated experiments. (c) Intensity profiles along the dash lines in B. The data are representative of 4 independently repeated experiments.

Supplementary Figure 12 Estimation of x, y, z, polar angle (α), and azimuthal angle (β) for simulated dipole emitters.

(a) Precision and accuracy achieved by smNet and the corresponding CRLB for all five dimensions (x, y, z, α, β) at axial locations z = −300 nm, z = 0 nm and z = 300 nm. The test data at each axial position were generated with a dipole PSF at the sub-region center, α from 5° to 90° with a step size of 5°, and β at 90°. (b) Comparison of localization precision and bias of the azimuthal angle (β) under various polar angles (α). The theoretical estimation precisions (√CRLB, solid lines) of β mainly depend on α. At β close to 0° and 360° and α smaller than 90°, the estimation errors from smNet increase significantly because of the phase wrapping of 360°; similarly, at α = 90°, there is a phase wrapping of 180° at each β position. However, after phase unwrapping (Supplementary Note 15), the localization precisions match well with the √CRLB and the biases are close to zero. The simulated data were analyzed by three smNets that were trained at β ranges from 0° to 120°, 100° to 260° and 220° to 360° separately (Supplementary Table 4). A selection method based on the log-likelihood ratio (LLR, Supplementary Note 15) was applied to the output results from each smNet, and the result with the minimum LLR was selected for each sub-region as the estimation of β. The data were simulated at β ranges from 0° to 350°, with a step size of 25° for each α value, and 1000 sub-regions were generated for each condition (Supplementary Table 3). The total photon count of each PSF was 8000 and the background photon count of each pixel was 5. The parameters used for PSF generation were NA = 1.4, λ = 690 nm, pixel size of 50 nm, subregion size of 32 × 32 pixels, and the refractive indices of the immersion medium, the sample medium and the cover glass equal to 1.52, 1.35, and 1.52 respectively. The training range and conditions can be found in Supplementary Tables 2 and 4. See Supplementary Note 17 for a detailed discussion of smNet performance on dipole PSFs.
(c) Step plot of azimuthal angle β estimation when the polar angle α equals 10°, 50° and 90°, respectively, before phase unwrapping. 1000 PSFs were analyzed per step. The simulated data were analyzed with the same three smNets and LLR-based selection described in b. See Methods and Supplementary Table 3 for simulation details and parameters.
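The phase-wrapping pitfall described above, where β estimates near the 0°/360° boundary appear to have huge errors, is a generic circular-statistics issue. The sketch below shows the basic remedy of wrapping estimate−truth differences into a half-open period before computing bias and precision; the paper's actual unwrapping and LLR-selection procedure (Supplementary Note 15) is more involved.

```python
import numpy as np

def circular_error_deg(estimates, truth, period=360.0):
    """Bias and precision of a periodic angle estimate.

    Differences are wrapped into (-period/2, period/2] so that an
    estimate of 358 deg for a true angle of 0 deg counts as a -2 deg
    error, not a 358 deg error. Returns (mean error, s.t.d. of error).
    """
    d = (np.asarray(estimates, float) - truth + period / 2) % period - period / 2
    return d.mean(), d.std()

# Estimates straddling the 0/360 boundary wrap to [-2, 2, -1] degrees.
bias, prec = circular_error_deg([358.0, 2.0, 359.0], truth=0.0)
```

Without the wrap, the same three estimates would report a bias of about 240°, which is the apparent divergence seen near the β boundaries before unwrapping.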

Supplementary Figure 13 smNet measurements of wavefront distortions and comparison with the ground truth.

(a,b) smNet measurement of wavefront shape composed of 12 or 21 Zernike modes through simulated PSFs. The amplitude of each mode is randomly sampled from −159.15 to 159.15 mλ (see Methods and Supplementary Table 3). The nine example PSFs (upper left for each test) are randomly picked from the 1st planes of the 1000 biplane PSFs. The parameters used for PSF generation were NA = 1.2, λ = 690 nm, pixel size of 130 nm, sub-region size of 16 × 16 pixels and plane separation of 500 nm. The refractive indices of the immersion medium and the sample medium are equal to 1.33. The centers and error bars are mean and s.e.m. respectively (n = 1000).

Supplementary Figure 14 Comparison of wavefront estimation results of bead data using smNet and phase retrieval (PR) methods.

The bead data for smNet were recorded by imaging well-isolated fluorescent beads (100 nm, crimson beads, custom-designed, Invitrogen) on a coverslip at stage positions from −400 to 400 nm, with a step size of 10 nm and 10 frames per step, at a frame rate of 50 Hz. The same sample was used for phase retrieval (PR), and the data were recorded at stage positions from −1 to 1 µm, with a step size of 100 nm and 1 frame per step, at a frame rate of 10 Hz. The acquisition parameters were chosen based on the training range of smNet and the optimized performance for PR. A deformable mirror was used to introduce a certain type of aberration to the bead sample by setting the corresponding input-voltage amplitude to 1 A.U. (arbitrary units). Flat means no aberration was introduced. An average of 5 to 9 measurements for smNet and 2 measurements for PR were used for each plot. The results from both methods are consistent, and the subtle difference between them indicates uncertainties in aberration estimation using either smNet or PR. The parameters used for training-PSF generation were NA = 1.4, λ = 690 nm, pixel size of 122 nm, sub-region size of 16 × 16 pixels, plane separation of 386 nm (measured with the phase retrieval method, ref. 18), and the refractive index of the immersion medium equal to 1.516.

Supplementary Figure 15 Aberration estimations for blinking Alexa Fluor 647 on a coverslip with manually introduced wavefront shapes.

(a,c) Time traces of dynamic aberration estimation. During the data acquisition, a certain type of aberration was introduced by a deformable mirror (DM) by setting the corresponding input-voltage amplitude to 1 A.U. (arbitrary units). Flat means no aberration was introduced. The estimated wavefront distortions are shown in the pupil images above each time trace. (b,d) The corresponding estimated aberration amplitudes (in terms of Zernike polynomials) of each pupil function shown in a and c. The data were collected at a frame rate of 50 Hz. A small time delay in alternating between two aberration settings (e.g. DAst and Ast) is caused by the voltage-amplitude adjustment of the DM. The parameters used for training-PSF generation were NA = 1.4, λ = 690 nm, pixel size of 122 nm, sub-region size of 16 × 16 pixels, plane separation of 386 nm (measured with phase retrieval, ref. 18), and the refractive index of the immersion medium equal to 1.516. Eight experiments were repeated independently with similar results.

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–15, Supplementary Tables 1–4 and Supplementary Notes 1–17

Reporting Summary

Supplementary Data

Raw data frames and aberration estimation results of the first 2,000 frames in Supplementary Video 3.

Supplementary Software

smNet software package. This software package is compressed as a .zip file. It includes LuaJIT source code for training smNet, MATLAB source code for generating various PSF models and corresponding calculation of CRLB, MATLAB source code for phase retrieval, and MATLAB source code for estimation of total photon and background photon counts. Instruction files describing the usage of the software packages are also included.

Supplementary Video 1

Aberration estimations on simulated single-molecule emission patterns with different wavefront distortions. The top left panel shows simulated biplane PSFs. The top right panel shows smNet-estimated wavefront distortions and their differences from the ground truth when averaging over 1 to 100 sub-regions. The ground-truth wavefront is displayed in a red box before changing to the next wavefront shape. The bottom panel shows smNet estimations of the amplitudes of 12 Zernike modes after averaging over 1 to 100 sub-regions, compared with the ground-truth amplitudes. The centers and error bars represent the mean and s.e.m. of the result based on 1 to 100 sub-regions.

Supplementary Video 2

Time series of wavefront estimation during multi-section SMSN imaging of immunolabeled TOM20 in COS-7 cells. The top left panel shows raw single molecule blinking frames using a biplane detection scheme. The top right panel shows the wavefront estimation (pupil) and its change (pupil diff) for each axial plane. The middle panel shows the time traces of the amplitude estimations on five different Zernike modes. The bottom panel shows the dynamic estimations of 12 Zernike amplitudes for each axial plane. A total of 11 optical sections were imaged, each of which contained 2,000 raw single-molecule blinking camera frames.

Supplementary Video 3

Time series of wavefront estimation imaging Alexa Fluor 647 on a coverslip with alternating astigmatism and diagonal astigmatism introduced using a deformable mirror during the data acquisition. The top right panel shows the dynamic wavefront estimation (pupil) and its change (pupil diff). The middle panel shows the time traces of the amplitude estimations of the two Zernike modes. The bottom panel shows the dynamic amplitude estimations of 12 Zernike modes. Eight experiments were repeated independently with similar results.

Supplementary Video 4

Time series of aberration estimation imaging Alexa Fluor 647 on a coverslip with alternating astigmatism, diagonal astigmatism, coma x, coma y, and first and second spherical aberration introduced with a deformable mirror during the data acquisition. The top right panel shows the dynamic wavefront estimation (pupil) and its change (pupil diff). The middle panel shows the time traces of the amplitude estimations of the five Zernike modes. The bottom panel shows the dynamic amplitude estimations of 12 Zernike modes. Eight experiments were repeated independently with similar results.

Supplementary Video 5

Time series of aberration estimation imaging Alexa Fluor 647 on a coverslip with alternating coma x and coma y introduced with a deformable mirror during the data acquisition. The top right panel shows the dynamic wavefront estimation (pupil) and its change (pupil diff). The middle panel shows the time traces of the amplitude estimations of the two Zernike modes. The bottom panel shows the dynamic amplitude estimations of 12 Zernike modes. Eight experiments were repeated independently with similar results.



Cite this article

Zhang, P., Liu, S., Chaurasia, A. et al. Analyzing complex single-molecule emission patterns with deep learning. Nat Methods 15, 913–916 (2018). https://doi.org/10.1038/s41592-018-0153-5

