Whole-cell organelle segmentation in volume electron microscopy

Abstract

Cells contain hundreds of organelles and macromolecular assemblies. Obtaining a complete understanding of their intricate organization requires the nanometre-level, three-dimensional reconstruction of whole cells, which is only feasible with robust and scalable automatic methods. Here, to support the development of such methods, we annotated up to 35 different cellular organelle classes—ranging from endoplasmic reticulum to microtubules to ribosomes—in diverse sample volumes from multiple cell types imaged at a near-isotropic resolution of 4 nm per voxel with focused ion beam scanning electron microscopy (FIB-SEM)1. We trained deep learning architectures to segment these structures in 4 nm and 8 nm per voxel FIB-SEM volumes, validated their performance and showed that automatic reconstructions can be used to directly quantify previously inaccessible metrics including spatial interactions between cellular components. We also show that such reconstructions can be used to automatically register light and electron microscopy images for correlative studies. We have created an open data and open-source web repository, ‘OpenOrganelle’, to share the data, computer code and trained models, which will enable scientists everywhere to query and further improve automatic reconstruction of these datasets.

Fig. 1: Training data and machine learning.
Fig. 2: Network evaluations and refined predictions.
Fig. 3: Analysis and biological insight.
Fig. 4: Scaling predictions and CLEM auto-registration.

Data availability

All data generated and analysed during this study can be found and explored through https://openorganelle.janelia.org/. For queries and feedback on the project, please email cosemdata@janelia.hhmi.org. Source data are provided with this paper.

Code availability

All software and source code generated during this study can be found at https://github.com/janelia-cosem/heinrich-2021a/tree/dff2e07.

References

  1. Xu, C. S. et al. An open-access volume electron microscopy atlas of whole cells and tissues. Nature https://doi.org/10.1038/s41586-021-03992-4 (2021).

  2. Valm, A. M. et al. Applying systems-level spectral imaging and analysis to reveal the organelle interactome. Nature 546, 162–167 (2017).

  3. Turaga, S. C. et al. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Comput. 22, 511–538 (2010).

  4. Ciresan, D., Giusti, A., Gambardella, L. M. & Schmidhuber, J. Deep neural networks segment neuronal membranes in electron microscopy images. In Proc. 25th International Conference on Neural Information Processing Systems (eds Pereira, F., Burges, C. J. C., Bottou, L. & Weinberger, K. Q.) 2843–2851 (Curran Associates, 2012).

  5. Januszewski, M. et al. High-precision automated reconstruction of neurons with flood-filling networks. Nat. Methods 15, 605–610 (2018).

  6. Funke, J. et al. Large scale image segmentation with structured loss based deep learning for connectome reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 41, 1669–1680 (2019).

  7. Kreshuk, A. et al. Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images. PLoS ONE 6, e24899 (2011).

  8. Becker, C., Ali, K., Knott, G. & Fua, P. Learning context cues for synapse segmentation in EM volumes. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2012 (eds Ayache, N., Delingette, H., Golland, P. & Mori, K.) 585–592 (Springer, 2012).

  9. Dorkenwald, S. et al. Automated synaptic connectivity inference for volume electron microscopy. Nat. Methods 14, 435–442 (2017).

  10. Heinrich, L., Funke, J., Pape, C., Nunez-Iglesias, J. & Saalfeld, S. Synaptic cleft segmentation in non-isotropic volume electron microscopy of the complete Drosophila brain. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018 (eds Frangi, A. et al.) 317–325 (Springer, 2018).

  11. Buhmann, J. et al. Automatic detection of synaptic partners in a whole-brain Drosophila EM dataset. Nat. Methods 18, 771–774 (2021).

  12. Lucchi, A., Li, Y., Smith, K. & Fua, P. Structured image segmentation using kernelized features. In Computer Vision—ECCV 2012 (eds Fitzgibbon, A. et al.) 400–413 (Springer, 2012).

  13. Kasthuri, N. et al. Saturated reconstruction of a volume of neocortex. Cell 162, 648–661 (2015).

  14. Lucchi, A. et al. Learning structured models for segmentation of 2-D and 3-D imagery. IEEE Trans. Med. Imaging 34, 1096–1110 (2015).

  15. Márquez Neila, P. et al. A fast method for the segmentation of synaptic junctions and mitochondria in serial electron microscopic images of the brain. Neuroinformatics 14, 235–250 (2016).

  16. Oztel, I., Yolcu, G., Ersoy, I., White, T. & Bunyak, F. Mitochondria segmentation in electron microscopy volumes using deep convolutional neural network. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 1195–1200 (IEEE, 2017).

  17. Cetina, K., Buenaposada, J. M. & Baumela, L. Multi-class segmentation of neuronal structures in electron microscopy images. BMC Bioinformatics 19, 298 (2018).

  18. Casser, V., Kang, K., Pfister, H. & Haehn, D. Fast mitochondria detection for connectomics. In Proceedings of Machine Learning Research (PMLR) (eds Arbel, T. et al.) 121, 111–120 (2020).

  19. Wei, D. et al. MitoEM dataset: large-scale 3D mitochondria instance segmentation from EM images. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2020 (eds Martel, A. L. et al.) 66–76 (Springer, 2020).

  20. Perez, A. J. et al. A workflow for the automatic segmentation of organelles in electron microscopy image stacks. Front. Neuroanat. 8, 126 (2014).

  21. Tek, F. B., Kroeger, T., Mikula, S. & Hamprecht, F. A. Automated cell nucleus detection for large-volume electron microscopy of neural tissue. In 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI) 69–72 (IEEE, 2014).

  22. Narasimha, R., Ouyang, H., Gray, A., McLaughlin, S. W. & Subramaniam, S. Automatic joint classification and segmentation of whole cell 3D images. Pattern Recognit. 42, 1067–1079 (2009).

  23. Rigamonti, R., Lepetit, V. & Fua, P. Beyond KernelBoost. https://infoscience.epfl.ch/record/200378/files/rigamonti_tr14a_1.pdf (2014).

  24. Karabağ, C. et al. Segmentation and modelling of the nuclear envelope of HeLa cells imaged with serial block face scanning electron microscopy. J. Imaging 5, 75 (2019).

  25. Spiers, H. et al. Deep learning for automatic segmentation of the nuclear envelope in electron microscopy data, trained with volunteer segmentations. Traffic 22, 240–253 (2021).

  26. Žerovnik Mekuč, M. et al. Automatic segmentation of mitochondria and endolysosomes in volumetric electron microscopy data. Comput. Biol. Med. 119, 103693 (2020).

  27. Liu, J. et al. Automatic reconstruction of mitochondria and endoplasmic reticulum in electron microscopy volumes by deep learning. Front. Neurosci. 14, 599 (2020).

  28. Eckstein, N., Buhmann, J., Cook, M. & Funke, J. Microtubule tracking in electron microscopy volumes. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2020 (eds Martel, A. L. et al.) 99–108 (Springer, 2020).

  29. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015 (eds Navab, N., Hornegger, J., Wells, W. & Frangi, A.) 234–241 (Springer, 2015).

  30. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016 (eds Ourselin, S. et al.) 424–432 (Springer, 2016).

  31. Funke, J., Wu, J. & Barnes, C. Waterz—simple watershed and agglomeration library for affinity graphs. GitHub https://github.com/funkey/waterz/tree/7c530ac (2020).

  32. Zlateski, A. & Seung, H. S. Image segmentation by size-dependent single linkage clustering of a watershed basin graph. Preprint at https://arxiv.org/abs/1505.00249 (2015).

  33. Barlan, K. & Gelfand, V. I. Microtubule-based transport and the distribution, tethering, and organization of organelles. Cold Spring Harb. Perspect. Biol. 9, a025817 (2017).

  34. Goyal, U. & Blackstone, C. Untangling the web: mechanisms underlying ER network formation. Biochim. Biophys. Acta Mol. Cell Res. 1833, 2492–2498 (2013).

  35. Blackstone, C. Cellular pathways of hereditary spastic paraplegia. Annu. Rev. Neurosci. 35, 25–47 (2012).

  36. Descoteaux, M., Audette, M., Chinzei, K. & Siddiqi, K. Bone enhancement filtering: application to sinus bone segmentation and simulation of pituitary surgery. Comput. Aided Surg. 11, 247–255 (2006).

  37. Nixon-Abell, J. et al. Increased spatiotemporal resolution reveals highly dynamic dense tubular matrices in the peripheral ER. Science 354, aaf3928 (2016).

  38. Terasaki, M. et al. Stacked endoplasmic reticulum sheets are connected by helicoidal membrane motifs. Cell 154, 285–296 (2013).

  39. Coulter, M. E. et al. The ESCRT-III protein CHMP1A mediates secretion of sonic hedgehog on a distinctive subtype of extracellular vesicles. Cell Rep. 24, 973–986 (2018).

  40. Hoffman, D. P. et al. Correlative three-dimensional super-resolution and block-face electron microscopy of whole vitreously frozen cells. Science 367, eaaz5357 (2020).

  41. Xu, C. S. et al. Enhanced FIB-SEM systems for large-volume 3D imaging. Elife 6, e25916 (2017).

  42. Saalfeld, S., Cardona, A., Hartenstein, V. & Tomančák, P. As-rigid-as-possible mosaicking and serial section registration of large ssTEM datasets. Bioinformatics 26, i57–i63 (2010).

  43. Saalfeld, S., Pisarev, I., Hanslovsky, P., Bogovic, J. A., Champion, A., Rueden, C. & Kirkham, J. A. N5—a scalable Java API for hierarchies of chunked n-dimensional tensors and structured meta-data. GitHub https://github.com/saalfeldlab/n5/tree/n5-2.5.1 (2021).

  44. Pavelka, M. & Roth, J. Functional Ultrastructure: Atlas of Tissue Biology and Pathology (Springer, 2015).

  45. Saalfeld, S., Funke, J., Pietzsch, T., Nunez-Iglesias, J., Hanslovsky, P., Bogovic, J., Wolny, A. & Melnikov, E. BigCAT. GitHub https://github.com/saalfeldlab/bigcat/tree/0.0.3-beta-1 (2018).

  46. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).

  47. Rueden, C., Schindelin, J., Hiner, M. & Arganda-Carreras, I. Skeletonize3D. GitHub https://github.com/fiji/Skeletonize3D/tree/Skeletonize3D_-2.1.1 (2017).

  48. Lee, T. C., Kashyap, R. L. & Chu, C. N. Building skeleton models via 3-D medial surface axis thinning algorithms. CVGIP Graph. Models Image Process. 56, 462–478 (1994).

  49. Klein, S., Staring, M., Murphy, K., Viergever, M. A. & Pluim, J. P. W. elastix: a toolbox for intensity-based medical image registration. IEEE Trans. Med. Imaging 29, 196–205 (2010).

  50. Maitin-Shepard, J. et al. Neuroglancer. GitHub https://github.com/google/neuroglancer/tree/v2.22 (2021).

  51. Abramov, D. et al. React. GitHub https://github.com/facebook/react/tree/v17.0.2 (2021).

  52. Perlman, E. Visualizing and interacting with large imaging data. Microsc. Microanal. 25, 1374–1375 (2019).

  53. Hubbard, P. M. et al. Accelerated EM connectome reconstruction using 3D visualization and segmentation graphs. Preprint at https://doi.org/10.1101/2020.01.17.909572 (2020).

  54. Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012).

  55. Pietzsch, T., Saalfeld, S., Preibisch, S. & Tomancak, P. BigDataViewer: visualization and processing for large image data sets. Nat. Methods 12, 481–483 (2015).

  56. Bogovic, J. A., Saalfeld, S., Hulbert, C., Pisarev, I., Rueden, C., Moon, H. K. & Preibisch, S. N5-IJ. GitHub https://github.com/saalfeldlab/n5-ij/tree/n5-ij-3.0.0 (2021).

  57. Amazon Web Services. What is the AWS Command Line Interface? https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html (2021).

Acknowledgements

This work is part of the COSEM Project Team at Janelia Research Campus, Howard Hughes Medical Institute. The COSEM Project Team consisted of: R. Ali, R. Arruda, R. Bahtra, D. Bennett, D. Nguyen, W. Park and A. Petruncio, led by A. Weigel with Steering Committee of J. Funke, H. Hess, W. Korff, J. Lippincott-Schwartz and S. Saalfeld. We thank R. Ali, R. Arruda and D. Nguyen for their work generating training data; R. Bahtra for his work generating masks of datasets and providing manual annotations; A. Aziz for his work correcting mitochondria overmerging; G. Ihrke and Project Technical Resources for management and coordination and staff support; the Janelia Scientific Computing Shared Resource, especially T. Dolafi and S. Berg, for their help generating the database and visualization tools; C. Pape and J. Nunez-Iglesias for their work on the inference pipeline; V. Custard for administrative support; S. van Engelenburg, H. Hoffman, E. Betzig, D. Hoffman, C. Walsh and M. Coulter for providing their data; Amazon Web Services for free hosting of our data through their open data program; G. Shtengel for providing FIB-SEM data attributes and for his early work manually segmenting organelles and aligning CLEM datasets, which motivated the need for more automated approaches; and G. Meissner for critical reading of the manuscript. This work was supported by Howard Hughes Medical Institute, Janelia Research Campus.

Author information

Contributions

The COSEM Project Team, S.S., A.V.W., W.K., J.L.-S., H.F.H. and J.F. conceptualized and supervised the project. C.S.X. and H.F.H. provided the FIB-SEM data and preprocessing. D.B., S.P. and C.S.X. organized FIB-SEM data and data attributes. W.P., A.P. and A.V.W. provided manual annotations, evaluations and proofreading. D.B. built the data management infrastructure. L.H. and S.S. developed machine learning algorithms. L.H. performed network training and automatic evaluations. N.E. and J.F. developed MT modelling algorithms. N.E. performed MT modelling. D.A. and S.S. developed refinement and analysis algorithms. D.A. analysed data. J.B. and S.S. developed automated CLEM registration algorithms. J.B. performed automated CLEM registration. D.B., J.C. and A.V.W. developed, D.B. and J.C. implemented and A.V.W., D.B. and S.P. proofread and contributed data to the data portal, OpenOrganelle. L.H., A.V.W., S.S., D.B., D.A., J.B., N.E. and A.P. wrote the manuscript with input from all co-authors. J.L.-S. provided critical review, commentary and revision in the writing process of the manuscript.

Corresponding authors

Correspondence to Stephan Saalfeld or Aubrey V. Weigel.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature thanks Robert Murphy, Jason Swedlow and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Organelle classification.

Examples of each class used for training input from multiple datasets. Organelles were manually identified using morphological features established in the literature. A description of each class can be found in the Supplementary Methods. Scale bars are 100 nm.

Extended Data Fig. 2 Class frequencies and holdout block.

a, 37 different classes are used to classify all intracellular structures. These classes are combined into 35 potentially overlapping semantic categories (see Supplementary Methods, Extended Data Fig. 1). Classes in bold depict these super classes. As an example, the ER object class is expanded in the subpanel. In green are classes that are predicted jointly by type ‘many’ networks. Denoted with a dot are the super classes, with a few additional classes, used in the type ‘few’ networks. The type ‘all’ networks train jointly on all 35 classes. b, Annotated volume per dataset. Reported is the percentage of the total cell that is annotated. c, d, 3D rendering of thresholded predictions (c) and ground truth (d) in a 4 μm × 4 μm × 4 μm holdout block in jrc_hela-3. Shown are the nucleus (magenta), plasma membrane (grey), ER and NE (green), mitochondria (orange), vesicles (red), lysosomes (yellow), endosomes (blue), and microtubules (white).

Source data

Extended Data Fig. 3 Network evaluations and refined predictions.

a, Validation and test performance measured by F1 score for thresholded predictions on holdout blocks from four datasets. Manual validation refers to F1 score of inferences with settings optimized manually on the whole dataset. Labels sorted by average test score. b, Comparison of networks using different multi-class strategies. Each data point represents the F1 score (test performance) on a holdout block with the colour denoting the multi-class strategy (‘all’/’many’/’few’). c, 3D rendering of refined predictions for each dataset. Classes shown are plasma membrane (grey), ER (green), mitochondria (orange), nucleus (purple), endosomal system (blue), and vesicles (red).
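
For readers who want to reproduce a voxel-wise evaluation on their own predictions, a minimal sketch of an F1 computation is given below. It assumes 8-bit prediction volumes thresholded at 127, as in the paper, and is not the authors' released evaluation code (see Code availability for that).

```python
# Minimal sketch of a voxel-wise F1 evaluation for one organelle class,
# assuming an 8-bit prediction volume and a boolean ground-truth mask.
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray, threshold: int = 127) -> float:
    """F1 score of a 0-255 prediction volume against a boolean ground-truth mask."""
    pred_mask = pred >= threshold
    tp = np.count_nonzero(pred_mask & truth)
    fp = np.count_nonzero(pred_mask & ~truth)
    fn = np.count_nonzero(~pred_mask & truth)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```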

Source data

Extended Data Fig. 4 Evaluation metrics and comparison of manual and automatic hyperparameter tuning.

a, F1 scores on holdout blocks from four datasets comparing manual and automatic hyperparameter tuning. Data include all results we collected using the manual comparison of thresholded predictions on whole cells, i.e., comparisons across iterations only, as well as comparisons across the best iterations of different network types. For automatic validation scores, equivalent queries were made against the database. b, F1 scores on holdout regions from four datasets comparing F1 score and Mean False Distance as the metric used for hyperparameter tuning. Data points are equivalent to those from a.
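
The mean-false-distance metric used in b can be sketched as below. This is one plausible reading of the metric (the mean distance of false-positive voxels to the ground truth, averaged with the false-negative counterpart); the exact definition in the Supplementary Methods may differ, and the function and parameter names are illustrative only.

```python
# Assumed sketch of a mean-false-distance style metric for hyperparameter tuning.
import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_false_distance(pred_mask: np.ndarray, truth_mask: np.ndarray,
                        voxel_size_nm: float = 4.0) -> float:
    # Distance (nm) of every voxel to the nearest foreground voxel of each mask.
    dist_to_truth = distance_transform_edt(~truth_mask, sampling=voxel_size_nm)
    dist_to_pred = distance_transform_edt(~pred_mask, sampling=voxel_size_nm)
    false_pos = pred_mask & ~truth_mask
    false_neg = truth_mask & ~pred_mask
    mfd_fp = dist_to_truth[false_pos].mean() if false_pos.any() else 0.0
    mfd_fn = dist_to_pred[false_neg].mean() if false_neg.any() else 0.0
    return 0.5 * (mfd_fp + mfd_fn)
```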

Source data

Extended Data Fig. 5 Effect of prediction refinements on evaluation metrics.

a–c, F1 score (a), precision (b) and recall (c) on holdout blocks from four datasets before (raw) and after (refined) the refinements described in the Supplementary Methods. The network types and iterations represented here are listed in Supplementary Table 1 and were optimized manually with a bias towards potential improvement through the refinement process.

Source data

Extended Data Fig. 6 Mitochondria overmerging corrections.

a, Whole-cell 3D rendering of mitochondria in jrc_hela-2 segmented using a naive connected-component analysis. Scale bar is 4 μm. b, FIB-SEM and raw mitochondria predictions thresholded at 127 (d = 0 nm) for the boxed region in (a), shown for one 2D slice. c, Naive connected-component segmentation of mitochondria for the region in (a), performed on smoothed predictions thresholded at 127 and followed by size filtering and hole-filling. d, To alleviate overmerging of mitochondria, we smooth the predictions and perform watershed segmentation on all voxels greater than or equal to 127. Shown are the resultant watershed fragments. e, To create the improved mitochondria segmentations, we agglomerate adjacent fragments in d based on parameters that best optimize the resultant segmentations, as chosen by an expert user. f, Final whole-cell rendering of corrected mitochondria predictions. Scale bar is 4 μm.
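
A rough sketch of the oversegmentation step in d is given below, using standard scikit-image components. The smoothing sigma and seed spacing are placeholders, not the authors' settings; agglomeration of the fragments (e) is a separate step, performed for example with the waterz library (ref. 31).

```python
# Sketch of a distance-transform watershed that oversegments thresholded
# mitochondria predictions into fragments for later agglomeration.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def mito_watershed_fragments(pred: np.ndarray, sigma: float = 2.0,
                             threshold: int = 127) -> np.ndarray:
    """Oversegment an 8-bit mitochondria prediction volume into watershed fragments."""
    smoothed = ndimage.gaussian_filter(pred.astype(np.float32), sigma)
    mask = smoothed >= threshold
    # Distance to background; its peaks seed one fragment per mitochondrial 'core'.
    distance = ndimage.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=10, labels=mask)
    seeds = np.zeros(mask.shape, dtype=np.int32)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood from the seeds within the mask; adjacent fragments are merged later.
    return watershed(-distance, seeds, mask=mask)
```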

Extended Data Fig. 7 Microtubule refinement.

a, 3D renderings of the ground truth (red) and reconstructed microtubules (cyan) in a selected 2 μm cube test block in jrc_hela-2. Note the close correspondence between the ground-truth and the automatically reconstructed microtubules. b, Comparison of the accuracy of MT reconstruction after refinement for four different cells, measured over two densely traced 2 μm cubes for each dataset. Accuracy is measured on individual edges, where an edge is correct if it connects two reconstruction vertices that are matched to the same ground-truth microtubule track. c, d, Comparison of the baseline microtubule refinement and the method described in Eckstein et al.28. Shown is the accuracy in terms of topological errors on full tracks (c) and precision and recall on individual edges (d). Each column shows the accuracy of both methods, acquired via 6-fold cross-validation over 4 ground-truth annotation blocks, where we used two blocks for validation and the remaining two for testing in each run. Numbers above each column in (c) are the median value of the 6 cross-validation runs. e–g, 2D FIB-SEM slices with ground-truth and reconstructed microtubules in the plane, and 3D renderings of the ground truth (red) and reconstructed microtubules (cyan) in selected 2 μm cube test blocks in jrc_hela-3 (e), jrc_jurkat-1 (f) and jrc_macrophage-3 (g). Plots show the topological errors normalized by ground-truth microtubule cable length for each cell, respectively. See Supplementary Table 4 for a complete listing of evaluation results. Standard box plots are used, showing the minima and maxima (whiskers), outliers (points), the first and third quartiles (box), and the median (line). For each dataset, n = 4 samples over 6 experiments in 1 cell.
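
The per-edge accuracy used in b and d can be written compactly; the sketch below assumes a precomputed matching from reconstruction vertices to ground-truth track identifiers and is not the authors' evaluation code.

```python
# Assumed sketch: an edge is correct if both of its reconstruction vertices
# are matched to the same ground-truth microtubule track.
from typing import Dict, Hashable, Iterable, Tuple

def edge_accuracy(edges: Iterable[Tuple[Hashable, Hashable]],
                  vertex_to_track: Dict[Hashable, int]) -> float:
    edges = list(edges)
    if not edges:
        return 0.0
    correct = sum(
        1 for u, v in edges
        if vertex_to_track.get(u) is not None
        and vertex_to_track.get(u) == vertex_to_track.get(v))
    return correct / len(edges)
```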

Source data

Extended Data Fig. 8 Measurements in holdout regions.

a, b, Surface area (a) and volume (b) deviations from the ground truth for sample organelles in the holdout regions. Analysis was performed on both raw predictions thresholded at 127 (dark shade) and refined predictions (light shade). To gauge reliability, we bisect each holdout region along each of the three dimensions, yielding six half-regions. The measurements within these six regions are shown (circles), as well as the measurements for the entire holdout region (bar).
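
As a hedged illustration of how surface area and volume can be read out of a binary organelle mask (voxel counting for volume, a marching-cubes mesh for surface area); the measurements reported in the paper are defined in the Supplementary Methods and may be computed differently.

```python
# Sketch of surface-area and volume measurement from a boolean 3D mask.
import numpy as np
from skimage import measure

def surface_area_and_volume(mask: np.ndarray, voxel_size_nm: float = 4.0):
    """Return (surface area in nm^2, volume in nm^3) for a boolean 3D mask."""
    volume = np.count_nonzero(mask) * voxel_size_nm ** 3
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=(voxel_size_nm,) * 3)
    area = measure.mesh_surface_area(verts, faces)
    return area, volume
```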

Source data

Extended Data Fig. 9 Analysis and biological insight.

a, Relative volume occupied by each predicted organelle, per cell. MT volume only shown for jrc_hela-2. b, ER predictions in jrc_macrophage-2. Left panel is a 2D FIB-SEM slice with overlaid ER predictions in green, the middle panel shows a 3D rendering of the ER predictions, and the right panel shows the ER medial surface partitioned into planar and tubular structures and corresponding tubule thicknesses (colour bar). Scale bar is 500 nm. c, ER predictions in jrc_hela-2. Left panel is a 2D FIB-SEM slice with overlaid ER predictions in green and mitochondria predictions in orange (bottom), the middle panel shows a 3D rendering of the ER predictions and mitochondria predictions (bottom), and the right panel shows the ER medial surface partitioned into planes and tubes along with tubule thicknesses (colour bar). Also shown in the bottom right panel are the contact site regions (blue) where ER and mitochondria are within 10 nm of each other. Scale bars are 500 nm. d, Quantification of the peripheral ER curvature and surface area compared between jrc_hela-2 and jrc_macrophage-2. e, Quantification of the peripheral ER curvature at contact sites between peripheral ER and mitochondria, for jrc_hela-2 and jrc_macrophage-2.
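
The 10 nm contact-site criterion in c corresponds to a simple distance-transform query, sketched below; this reproduces only the naive contact sites, not the refined contact sites described in Extended Data Fig. 10 and the Supplementary Methods.

```python
# Sketch of the naive ER-mitochondria contact-site definition:
# ER voxels lying within 10 nm of a mitochondrion.
import numpy as np
from scipy.ndimage import distance_transform_edt

def er_mito_contact_sites(er_mask: np.ndarray, mito_mask: np.ndarray,
                          voxel_size_nm: float = 4.0,
                          cutoff_nm: float = 10.0) -> np.ndarray:
    """Boolean mask of ER voxels within `cutoff_nm` of a mitochondrion."""
    dist_to_mito = distance_transform_edt(~mito_mask, sampling=voxel_size_nm)
    return er_mask & (dist_to_mito <= cutoff_nm)
```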

Source data

Extended Data Fig. 10 Organelle contact sites, planarity and skeletons.

a, ER (reconstructed, green) and mitochondria (orange) dense regions of jrc_hela-2 chosen for comparison, with a 2D FIB-SEM slice also displayed. b, The 2D FIB-SEM slice shown in a displays ER predictions (green), mitochondria predictions (orange), naive contact sites (magenta) and refined contact sites (blue), which are subsets of the naive contact sites. Scale bar is 1 μm. c, 3D rendering of mitochondria (orange) and simple contact sites (magenta). d, 3D rendering of mitochondria (orange) and refined contact sites (blue). e, 3D rendering of the refined ER segmentation in an example region of jrc_hela-2. Scale bar is 200 nm. f, Medial surface (black) produced from iterative topological thinning of the ER segmentation (grey). Scale bar is 200 nm. g, A planarity metric (colour) is calculated for each voxel in the medial surface based on the ER’s Hessian matrix eigenvalues at that voxel; higher values correspond to more planar regions. Scale bar is 200 nm. h, The planarity metric on the medial surface in g is used to reconstruct a curvature-labelled ER, which is thresholded at 0.6; voxels above this value are considered planar (blue) and voxels below it non-planar (red). Scale bar is 200 nm. i, Topological thinning is used to produce skeletons. Shown is a 3D rendering of an example mitochondrion (grey), its skeleton (red), pruned skeleton (blue), and longest shortest path (green) from jrc_hela-2. Scale bar is 1 μm. j, Unpruned skeleton used as a starting point. Scale bar is 1 μm. k, Iterative pruning produced a final skeleton such that no remaining branch was shorter than 80 nm. Scale bar is 1 μm. l, Mitochondrial length and average radius were calculated using the longest shortest path within the pruned skeleton. Scale bar is 1 μm. See Supplementary Methods for an in-depth description.
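
The Hessian-based planarity metric in g can be approximated with a Frangi/Descoteaux-style 'sheetness' measure (compare ref. 36). The sketch below is an assumption about the form of the metric, not the formula from the Supplementary Methods, and is intended for small cut-out regions rather than whole cells.

```python
# Assumed sheetness score from Gaussian second derivatives; the per-voxel
# 3x3 Hessian is memory-hungry, so apply to cropped regions only.
import numpy as np
from scipy.ndimage import gaussian_filter

def planarity(er_mask: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Per-voxel planarity in [0, 1]; higher values are more sheet-like."""
    img = er_mask.astype(np.float32)
    # Assemble the 3x3 Hessian from Gaussian second derivatives.
    hess = np.empty(img.shape + (3, 3), dtype=np.float32)
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            hess[..., i, j] = gaussian_filter(img, sigma, order=order)
    eigvals = np.linalg.eigvalsh(hess)
    # Sort eigenvalues by absolute magnitude; a sheet has one dominant |eigenvalue|.
    eigvals = np.take_along_axis(eigvals, np.argsort(np.abs(eigvals), axis=-1), axis=-1)
    l2 = np.abs(eigvals[..., 1])
    l3 = np.abs(eigvals[..., 2])
    return np.where(l3 > 0, 1.0 - l2 / np.maximum(l3, 1e-12), 0.0)
```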

Extended Data Fig. 11 Scaling predictions and CLEM auto-registration.

a, Comparison of networks trained with 4 nm and simulated 8 nm raw data for all samples. Each data point represents the F1 score (test performance) on one of the four holdout blocks, similar to Fig. 2b. b, Qualitative comparison of automated and manual registration for the region marked with the dashed box in c. PALM images show ER (magenta) and mitochondria (green). Landmarks were placed at corresponding points in the ER light channel and the ER predictions of the electron microscopy image, neither of which was used for automatic registration. This enables us to measure errors in an unbiased way, with respect to the true underlying transformation and not only the part of the transformation that can be inferred from the mitochondria membrane channel. White glyphs show human–human error (vertical) and human–automatic error (horizontal). Scale bar is 2 μm. c, A single slice of the Jacobian determinant map for the transformation registering electron microscopy to PALM for jrc_cos7-11. Red (blue) indicates a local increase (decrease) in volume. The dotted area shows the approximate location of cells. Scale bar is 10 μm. d, Histogram of the Jacobian determinant over the whole volume. e, Error map showing differences between automatic registrations using PALM or SIM as the target image. The dotted area shows the approximate location of cells. Scale bar is 10 μm. f, Histogram of PALM versus SIM errors over the area where a cell is present (white dotted line in e). All statistics are from a single cell in a single dataset as specified.
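
The Jacobian determinant map in c and d can be computed directly from a dense displacement field. The sketch below assumes a field sampled on the voxel grid in voxel units (an assumption about how the field is stored, independent of the registration software used in the paper).

```python
# Per-voxel Jacobian determinant of x -> x + u(x) for a displacement field u
# of shape (3, Z, Y, X); values > 1 indicate local expansion, < 1 contraction.
import numpy as np

def jacobian_determinant(displacement: np.ndarray) -> np.ndarray:
    """Per-voxel determinant of the Jacobian of the map x -> x + u(x)."""
    ndim = displacement.shape[0]
    # grad[i][j] = d u_i / d x_j via central differences.
    grad = [np.gradient(displacement[i]) for i in range(ndim)]
    jac = np.empty(displacement.shape[1:] + (ndim, ndim), dtype=np.float32)
    for i in range(ndim):
        for j in range(ndim):
            jac[..., i, j] = grad[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)
```

A histogram such as the one in panel d then follows from, for example, np.histogram(jacobian_determinant(u).ravel(), bins=100).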

Source data

Supplementary information

Supplementary Information

This file contains the Supplementary Note, Supplementary Methods, Supplementary Discussion and Supplementary References.

Reporting Summary

Peer Review File

Supplementary Tables

This file contains Supplementary Tables 1–5.

Supplementary Video 1

FIB-SEM, training data, and predictions. Showcase of the processing pipeline, using jrc_hela-2 as an example. We begin with the electron microscopy data. Then skilled annotators classify every voxel within a volume; shown here are 15 of these training blocks. These segmentations are fed into machine learning algorithms as training data. The prediction outputs from these algorithms are refined. Once the predicted, whole-cell segmentations are achieved, quantitative analytics of subcellular distributions, interactions, sizes and morphologies can be acquired as shown in Supplementary Video 2.

Supplementary Video 2

Three analysis examples. Three analysis examples in jrc_hela-2. The first example is from Fig. 3a, a microtubule contacting multiple different organelles. The second example is from Extended Data Fig. 9b, displaying the relationship between ER morphology and mitochondria contact sites. The third example is from Fig. 3c, showing the distribution of ribosomes bound to the ER.

Supplementary Video 3

CLEM registration. FIB-SEM and correlative light microscopy automatically registered using whole-cell mitochondria membrane predictions. Displayed are PALM images of mitochondria membrane marker Halo/JF525-TOMM20 and ER luminal marker mEmerald-ER3, predictions for mitochondria membrane and ER, as well as the corresponding (8 nm × 8 nm × 8 nm) FIB-SEM. A ‘warping’ from affine-only to the full-deformable transformation is also shown.

Source data

About this article

Cite this article

Heinrich, L., Bennett, D., Ackerman, D. et al. Whole-cell organelle segmentation in volume electron microscopy. Nature 599, 141–146 (2021). https://doi.org/10.1038/s41586-021-03977-3
