Segmentation of neurons from fluorescence calcium recordings beyond real time

Abstract

Fluorescent genetically encoded calcium indicators and two-photon microscopy help researchers understand brain function by generating large-scale in vivo recordings in multiple animal models. Automatic, fast and accurate active neuron segmentation is critical when processing these videos. Here we developed and characterized a novel method, Shallow U-Net Neuron Segmentation (SUNS), to quickly and accurately segment active neurons from two-photon fluorescence imaging videos. We used temporal filtering and whitening schemes to extract temporal features associated with active neurons, and used a compact shallow U-Net to extract spatial features of neurons. Our method was both more accurate and an order of magnitude faster than state-of-the-art techniques when processing multiple datasets acquired by independent experimental groups; the accuracy advantage was even larger when processing datasets containing few manually marked ground-truth labels. We also developed an online version, potentially enabling real-time feedback neuroscience experiments.


Fig. 1: Schematic for the proposed fast neuron segmentation algorithm based on a shallow U-Net.
Fig. 2: SUNS outperformed existing neuron segmentation algorithms in accuracy and speed on the ABO dataset.
Fig. 3: SUNS outperformed existing neuron segmentation algorithms in accuracy and speed when processing a variety of datasets.
Fig. 4: SUNS online outperformed CaImAn online in accuracy and speed on the ABO dataset.

Data availability

The trained network weights and the optimal hyperparameters can be accessed at https://github.com/YijunBao/SUNS_paper_reproduction/tree/main/paper_reproduction/training%20results. The output masks of all neuron segmentation algorithms can be accessed at https://github.com/YijunBao/SUNS_paper_reproduction/tree/main/paper_reproduction/output%20masks%20all%20methods. We used three public datasets to evaluate the performance of SUNS and other neuron segmentation algorithms. We used the videos of the ABO dataset from https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS, together with the corresponding manual labels created in our previous work, https://github.com/soltanianzadeh/STNeuroNet/tree/master/Markings/ABO. We used the Neurofinder dataset from https://github.com/codeneuro/neurofinder, together with the corresponding manual labels created in our previous work, https://github.com/soltanianzadeh/STNeuroNet/tree/master/Markings/Neurofinder. We used the videos and manual labels of the CaImAn dataset from https://zenodo.org/record/1659149. A more detailed description of how we used these datasets can be found in the readme of https://github.com/YijunBao/SUNS_paper_reproduction/tree/main/paper_reproduction.

Code availability

Code for SUNS can be accessed at https://github.com/YijunBao/Shallow-UNet-Neuron-Segmentation_SUNS (ref. 51). The version that reproduces the results in this paper can be accessed at https://github.com/YijunBao/SUNS_paper_reproduction (ref. 52).

References

  1. Akerboom, J. et al. Genetically encoded calcium indicators for multi-color neural activity imaging and combination with optogenetics. Front. Mol. Neurosci. 6, 2 (2013).

  2. Chen, T.-W. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013).

  3. Dana, H. et al. High-performance calcium sensors for imaging activity in neuronal populations and microcompartments. Nat. Methods 16, 649–657 (2019).

  4. Helmchen, F. & Denk, W. Deep tissue two-photon microscopy. Nat. Methods 2, 932–940 (2005).

  5. Stringer, C. et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, eaav7893 (2019).

  6. Grewe, B. F. et al. High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. Nat. Methods 7, 399–405 (2010).

  7. Soltanian-Zadeh, S. et al. Fast and robust active neuron segmentation in two-photon calcium imaging using spatiotemporal deep learning. Proc. Natl Acad. Sci. USA 116, 8554–8563 (2019).

  8. Pnevmatikakis, E. A. Analysis pipelines for calcium imaging data. Curr. Opin. Neurobiol. 55, 15–21 (2019).

  9. Klibisz, A. et al. in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (eds. Cardoso, J. et al.) 285–293 (Springer, 2017).

  10. Gao, S. Automated neuron detection. GitHub https://github.com/iamshang1/Projects/tree/master/Advanced_ML/Neuron_Detection (2016).

  11. Shen, S. P. et al. Automatic cell segmentation by adaptive thresholding (ACSAT) for large-scale calcium imaging datasets. eNeuro 5, ENEURO.0056-18.2018 (2018).

  12. Spaen, Q. et al. HNCcorr: a novel combinatorial approach for cell identification in calcium-imaging movies. eNeuro 6, ENEURO.0304-18.2019 (2019).

  13. Kirschbaum, E., Bailoni, A. & Hamprecht, F. A. DISCo for the CIA: deep learning, instance segmentation, and correlations for calcium imaging analysis. In Medical Image Computing and Computer Assisted Intervention (eds. Martel, A. L. et al.) 151–162 (Springer, 2020).

  14. Apthorpe, N. J. et al. Automatic neuron detection in calcium imaging data using convolutional networks. Adv. Neural Inf. Process Syst. 29, 3278–3286 (2016).

  15. Mukamel, E. A., Nimmerjahn, A. & Schnitzer, M. J. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron 63, 747–760 (2009).

  16. Maruyama, R. et al. Detecting cells using non-negative matrix factorization on calcium imaging data. Neural Netw. 55, 11–19 (2014).

  17. Pnevmatikakis, E. A. et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron 89, 285–299 (2016).

  18. Pachitariu, M. et al. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. Preprint at bioRxiv https://doi.org/10.1101/061507 (2017).

  19. Petersen, A., Simon, N. & Witten, D. SCALPEL: extracting neurons from calcium imaging data. Ann. Appl. Stat. 12, 2430–2456 (2018).

  20. Giovannucci, A. et al. CaImAn an open source tool for scalable calcium imaging data analysis. eLife 8, e38173 (2019).

  21. Sitaram, R. et al. Closed-loop brain training: the science of neurofeedback. Nat. Rev. Neurosci. 18, 86–100 (2017).

  22. Kearney, M. G. et al. Discrete evaluative and premotor circuits enable vocal learning in songbirds. Neuron 104, 559–575.e6 (2019).

  23. Carrillo-Reid, L. et al. Controlling visually guided behavior by holographic recalling of cortical ensembles. Cell 178, 447–457.e5 (2019).

  24. Rickgauer, J. P., Deisseroth, K. & Tank, D. W. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields. Nat. Neurosci. 17, 1816–1824 (2014).

  25. Packer, A. M., Russell, L. E., Dalgleish, H. W. P. & Häusser, M. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nat. Methods 12, 140–146 (2015).

  26. Zhang, Z. et al. Closed-loop all-optical interrogation of neural circuits in vivo. Nat. Methods 15, 1037–1040 (2018).

  27. Giovannucci, A. et al. OnACID: online analysis of calcium imaging data in real time. In Advances in Neural Information Processing Systems (eds. Guyon, I. et al.) (Curran Associates, 2017).

  28. Wilt, B. A., James, E. F. & Mark, J. S. Photon shot noise limits on optical detection of neuronal spikes and estimation of spike timing. Biophys. J. 104, 51–62 (2013).

  29. Jiang, R. & Crookes, D. Shallow unorganized neural networks using smart neuron model for visual perception. IEEE Access 7, 152701–152714 (2019).

  30. Ba, J. & Caruana, R. Do deep nets really need to be deep? Adv. Neural Inf. Process. Syst. (2014).

  31. Lei, F., Liu, X., Dai, Q. & Ling, B. W.-K. Shallow convolutional neural network for image classification. SN Appl. Sci. 2, 97 (2019).

  32. Yu, S. et al. A shallow convolutional neural network for blind image sharpness assessment. PLoS One 12, e0176632 (2017).

  33. Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation (Springer, 2015).

  34. Code Neurofinder (CodeNeuro, 2019); http://neurofinder.codeneuro.org/

  35. Arac, A. et al. DeepBehavior: a deep learning toolbox for automated analysis of animal and human behavior imaging data. Front. Syst. Neurosci. 13, 20 (2019).

  36. Shen, D., Wu, G. & Suk, H.-I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 19, 221–248 (2017).

  37. Zhou, P. et al. Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data. eLife 7, e28728 (2018).

  38. Meyer, F. Topographic distance and watershed lines. Signal Process. 38, 113–125 (1994).

  39. Pnevmatikakis, E. A. & Giovannucci, A. NoRMCorre: an online algorithm for piecewise rigid motion correction of calcium imaging data. J. Neurosci. Methods 291, 83–94 (2017).

  40. Keemink, S. W. et al. FISSA: a neuropil decontamination toolbox for calcium imaging signals. Sci. Rep. 8, 3493 (2018).

  41. Mitani, A. & Komiyama, T. Real-time processing of two-photon calcium imaging data including lateral motion artifact correction. Front. Neuroinform. 12, 98 (2018).

  42. Frankle, J. & Carbin, M. The lottery ticket hypothesis: finding sparse, trainable neural networks. In International Conference on Learning Representations (ICLR, 2019).

  43. Yang, W. & Lihong, X. Lightweight compressed depth neural network for tomato disease diagnosis. Proc. SPIE (2020).

  44. Oppenheim, A., Schafer, R. & Stockham, T. Nonlinear filtering of multiplied and convolved signals. IEEE Trans. Audio Electroacoust. 16, 437–466 (1968).

  45. Szymanska, A. F. et al. Accurate detection of low signal-to-noise ratio neuronal calcium transient waves using a matched filter. J. Neurosci. Methods 259, 1–12 (2016).

  46. Milletari, F., Navab, N. & Ahmadi, S. V-Net: fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision 565–571 (3DV, 2016).

  47. Lin, T.-Y. et al. Focal loss for dense object detection. In Proc. IEEE International Conference on Computer Vision 2980–2988 (IEEE, 2017).

  48. de Vries, S. E. J. et al. A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nat. Neurosci. 23, 138–151 (2020).

  49. Gilman, J. P., Medalla, M. & Luebke, J. I. Area-specific features of pyramidal neurons—a comparative study in mouse and rhesus monkey. Cereb. Cortex 27, 2078–2094 (2016).

  50. Ballesteros-Yáñez, I. et al. Alterations of cortical pyramidal neurons in mice lacking high-affinity nicotinic receptors. Proc. Natl Acad. Sci. USA 107, 11567–11572 (2010).

  51. Bao, Y. YijunBao/Shallow-UNet-Neuron-Segmentation_SUNS. Zenodo https://doi.org/10.5281/zenodo.4638171 (2021).

  52. Bao, Y. YijunBao/SUNS_paper_reproduction. Zenodo https://doi.org/10.5281/zenodo.4638135 (2021).

Acknowledgements

We acknowledge support from the BRAIN Initiative (NIH 1UF1-NS107678, NSF 3332147), the NIH New Innovator Program (1DP2-NS111505), the Beckman Young Investigator Program, the Sloan Fellowship and the Vallee Young Investigator Program, received by Y.G. We acknowledge Z. Zhu for early characterization of SUNS.

Author information

Contributions

Y.G. conceived and designed the project. Y.B. and Y.G. implemented the code for SUNS. Y.B. and S.S.-Z. implemented the code for other algorithms for comparison. Y.B. ran the experiment. Y.B., S.S.-Z., S.F. and Y.G. analysed the data. Y.B., S.S.-Z., S.F. and Y.G. wrote the paper.

Corresponding authors

Correspondence to Yijun Bao or Yiyang Gong.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Machine Intelligence thanks Xue Han and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 The average calcium response formed the temporal filter kernel.

We determined the temporal matched filter kernel by averaging calcium transients within a moderate SNR range; these transients likely represent the temporal response to single action potentials (ref. 2). a, Example data show all background-subtracted fluorescence calcium transients of all GT neurons in all videos in the ABO 275 μm dataset whose peak SNR (pSNR) fell in the range 6 < pSNR < 8 (gray). We minimized crosstalk from neighboring neurons by excluding transients during time periods when neighboring neurons also had transients. We normalized all transients such that their peak values were unity, and then averaged these normalized transients into an averaged spike trace (red). We used the portion of the average spike trace above e^(-1) (blue dashed line) as the final template kernel. b, When analyzing performance on the ABO 275 μm dataset through ten-fold leave-one-out cross-validation, using the temporal kernel determined in (a) within our temporal filtering scheme achieved a significantly higher F1 score than not using a temporal filter or using an unmatched filter (*P < 0.05, **P < 0.005; two-sided Wilcoxon signed-rank test, n = 10 videos), and a slightly higher F1 score than using a single exponentially decaying kernel (P = 0.77; two-sided Wilcoxon signed-rank test, n = 10 videos). Error bars are s.d. The gray dots represent scores for the test data in each round of cross-validation. The unmatched filter was a moving-average filter over 60 frames. c,d, Analogous to (a,b), but for the Neurofinder dataset. We determined the filter kernel using videos 04.01 and 04.01.test.
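
For readers who want to prototype this step, the sketch below builds a matched-filter kernel from pre-selected transients. It is a minimal NumPy illustration, assuming the transients are already equal-length, peak-aligned, selected for 6 < pSNR < 8 and free of crosstalk; the function name and the contiguous truncation at e^(-1) are illustrative choices, not the published implementation.

```python
import numpy as np

def build_matched_kernel(transients):
    """Sketch of the kernel construction in Extended Data Fig. 1a.

    `transients` is a list of equal-length, peak-aligned,
    background-subtracted traces, assumed to be pre-selected for
    6 < pSNR < 8 and free of crosstalk from neighboring neurons.
    """
    traces = np.asarray(transients, dtype=float)
    # Normalize each transient so its peak is unity, then average the
    # normalized transients into a single spike template (red trace).
    normalized = traces / traces.max(axis=1, keepdims=True)
    template = normalized.mean(axis=0)

    # Keep the contiguous portion of the template above e^(-1) around
    # the peak (blue dashed threshold) as the final kernel.
    above = template > np.exp(-1.0)
    peak = int(template.argmax())
    left, right = peak, peak
    while left > 0 and above[left - 1]:
        left -= 1
    while right < len(template) - 1 and above[right + 1]:
        right += 1
    return template[left:right + 1]
```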

Extended Data Fig. 2 The complexity of the CNN architecture controlled the trade-off between speed and accuracy.

We explored multiple potential CNN architectures to optimize performance. a-d, Various CNN architectures having depths of (a) two, (b) three, (c) four, or (d) five. For the three-depth architecture, we also tested different numbers of skip connections, ReLU (Rectified Linear Unit) instead of ELU (Exponential Linear Unit) as the activation function, and separable Conv2D instead of full Conv2D in the encoding path. The dense five-depth model mimicked the model used in UNet2Ds (ref. 9). The legend '0/n_i + n_i' indicates whether the skip connection was used (n_i + n_i) or not (0 + n_i). e, The F1 score and processing speed of SUNS using various CNN architectures when analyzing the ABO 275 μm dataset through ten-fold leave-one-out cross-validation. The right panel zooms in on the rectangular region in the left panel. Error bars are s.d. The legend (n_1, n_2, …, n_k) describes an architecture with depth k and n_i channels at the i-th depth. We determined that the three-depth model (4,8,16), using one skip connection at the shallowest layer, ELU, and full Conv2D (Fig. 1c), had a good trade-off between speed and accuracy; we used this architecture as the SUNS architecture throughout the paper. One important drawback of the ReLU activation function was its occasional (20% of the time) failure during training, compared to negligible failure rates for the ELU activation function.
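
The selected architecture is small enough to write out in full. The following Keras sketch shows a (4, 8, 16) three-depth U-Net with ELU activations, full Conv2D, and a single skip connection at the shallowest layer; the kernel sizes, sigmoid head, and exact layer ordering are assumptions for illustration — the published repository defines the exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers

def shallow_unet(input_shape=(None, None, 1)):
    """Sketch of the (4, 8, 16) three-depth shallow U-Net of Extended
    Data Fig. 2: ELU activations, full Conv2D, and one skip connection
    at the shallowest layer. Input height/width should be divisible by 4.
    """
    inp = layers.Input(shape=input_shape)

    # Encoding path: three depths with 4, 8, and 16 channels.
    c1 = layers.Conv2D(4, 3, padding="same", activation="elu")(inp)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(8, 3, padding="same", activation="elu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)
    c3 = layers.Conv2D(16, 3, padding="same", activation="elu")(p2)

    # Decoding path; only the shallowest layer keeps a skip connection.
    u2 = layers.UpSampling2D(2)(c3)
    c4 = layers.Conv2D(8, 3, padding="same", activation="elu")(u2)
    u1 = layers.UpSampling2D(2)(c4)
    u1 = layers.Concatenate()([u1, c1])   # the single skip connection
    c5 = layers.Conv2D(4, 3, padding="same", activation="elu")(u1)

    # Per-pixel probability of belonging to an active neuron.
    out = layers.Conv2D(1, 1, activation="sigmoid")(c5)
    return tf.keras.Model(inp, out)
```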

Extended Data Fig. 3 The F1 score of SUNS was robust to moderate variation of training and post-processing parameters.

We tested whether the accuracy of SUNS when analyzing the ABO 275 μm dataset within the ten-fold leave-one-out cross-validation relied on intricate tuning of the algorithm's hyperparameters. The evaluated training parameters included (a) the threshold of the SNR video (th_SNR) and (b) the training batch size. The evaluated post-processing parameters included (c) the threshold of the probability map (th_prob), (d) the minimum neuron area (th_area), (e) the threshold of the COM distance (th_COM), and (f) the minimum number of consecutive frames (th_frame). The solid blue lines are the average F1 scores, and the shaded regions are the mean ± one s.d. When evaluating the post-processing parameters in (c-f), we fixed each parameter under investigation at the given values and simultaneously optimized the F1 score over the other parameters. Variations in these hyperparameters produced only small variations in the F1 performance. The orange lines show the F1 score (solid) ± one s.d. (dashed) when we optimized all four post-processing parameters simultaneously. The similarity between the F1 scores on the blue lines and the scores on the orange lines suggests that optimizing three or four parameters simultaneously achieved similar optimized performance. Moreover, the relatively consistent F1 scores on the blue lines suggest that our algorithm did not rely on intricate hyperparameter tuning.
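
The fix-one, optimize-the-rest procedure in (c-f) amounts to a constrained grid search. The sketch below illustrates it; `f1_of` is a hypothetical callback that evaluates SUNS's F1 score on training data for one parameter setting, and the grids are placeholders.

```python
import itertools

def optimize_postprocessing(f1_of, grids, fixed=None):
    """Sketch of the sweep in Extended Data Fig. 3c-f: hold the
    parameters in `fixed` at given values and maximize F1 over the
    remaining grids. `grids` maps parameter names (th_prob, th_area,
    th_COM, th_frame) to candidate values; `f1_of(params)` is a
    placeholder for the real training-data evaluation.
    """
    fixed = fixed or {}
    free = {k: v for k, v in grids.items() if k not in fixed}
    best_f1, best_params = float("-inf"), None
    # Exhaustively evaluate every combination of the free parameters.
    for combo in itertools.product(*free.values()):
        params = {**dict(zip(free.keys(), combo)), **fixed}
        score = f1_of(params)
        if score > best_f1:
            best_f1, best_params = score, params
    return best_params, best_f1
```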

Extended Data Fig. 4 The performance of SUNS was better than that of other methods in the presence of intensity noise or motion artifacts.

The (a, d) recall, (b, e) precision, and (c, f) F1 score of all the (a-c) batch and (d-f) online segmentation algorithms in the presence of increasing intensity noise. The test dataset was the ABO 275 μm data with added random noise. The relative noise strength was defined as the ratio of the standard deviation of the random noise amplitude to the mean fluorescence intensity. As expected, the F1 scores of all methods decreased as the noise amplitude grew. The F1 score of SUNS was greater than the F1 scores of all other methods at all noise intensities. g-l, Same format as (a-f), but showing the performance in the presence of increasing motion artifacts. The motion artifact strength was defined as the standard deviation of the random movement amplitude (unit: pixels). As expected, the F1 scores of all methods decreased as the motion artifacts became stronger. The F1 score of SUNS was greater than the F1 scores of all other methods at all motion amplitudes. STNeuroNet and CaImAn batch were the most sensitive to strong motion artifacts, likely because they rely on accurate 3D spatiotemporal structure of the video. In contrast, SUNS relied more on 2D spatial structure, so it retained its accuracy better when spatial structures changed position across frames.
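
A perturbation test of this kind is easy to reproduce in outline. The sketch below adds Gaussian intensity noise scaled by the mean fluorescence and random per-frame rigid translations; the exact noise and motion models used in the figure may differ from these illustrative choices.

```python
import numpy as np
from scipy.ndimage import shift

def perturb_video(video, noise_std=0.0, motion_std=0.0, seed=0):
    """Sketch of the perturbations in Extended Data Fig. 4: additive
    Gaussian intensity noise (s.d. relative to the mean fluorescence)
    and random per-frame rigid translations (s.d. in pixels).
    `video` is a (T, H, W) array.
    """
    rng = np.random.default_rng(seed)
    out = video.astype(np.float32).copy()

    # Intensity noise: s.d. expressed relative to the mean intensity.
    if noise_std > 0:
        out += rng.normal(0.0, noise_std * video.mean(), size=video.shape)

    # Motion artifacts: random (dy, dx) shift applied to each frame.
    if motion_std > 0:
        for i in range(out.shape[0]):
            dy, dx = rng.normal(0.0, motion_std, size=2)
            out[i] = shift(out[i], (dy, dx), order=1, mode="nearest")
    return out
```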

Extended Data Fig. 5 SUNS accurately mapped the spatial extent of each neuron even if the spatial footprints of neighboring cells overlapped.

SUNS segmented active neurons within each individual frame, and then accurately collected and merged the instances belonging to the same neurons. We selected two example pairs of overlapping neurons identified by SUNS from the ABO video 539670003, and show their traces and instances when they were activated independently. a, The SNR images of the region surrounding the selected neurons. The left image is the maximum projection of the SNR video over the entire recording, which shows that the two neurons were active and overlapping. The right images are single-frame SNR images at two different time points, each at the peak of a fluorescence transient where only one of the two neurons was active. The segmentation of each neuron generated by SUNS is shown as a contour in a different color. The scale bar is 3 μm. b, The temporal SNR traces of the selected neurons, matched to the colors of their contours in (a). Because the pairs of neurons overlapped, their fluorescence traces displayed substantial crosstalk. The dashed markers above each trace show the active periods of each neuron determined by SUNS. The colored triangles below each trace indicate the manually selected times of the single-frame images shown in (a). c,d, Parallel to (a,b), but for a different overlapping neuron pair. e, We quantified the ability of each segmentation algorithm to find overlapping neurons using the recall score. We divided the ground-truth neurons in all the ABO videos into two groups: neurons without and with overlap with other neurons. We then computed the recall scores for both groups. The recall of SUNS on spatially overlapping neurons was not significantly lower (and was numerically higher) than the recall of SUNS on non-overlapping neurons (P > 0.8, one-sided Wilcoxon rank-sum test, n = 10 videos; n.s.l. – not significantly lower). Therefore, the performance of SUNS on overlapping neurons was at least as good as its performance on non-overlapping neurons. Moreover, the recall scores of SUNS in both groups were comparable to or significantly higher than those of other methods in those groups (**P < 0.005, n.s. – not significant; two-sided Wilcoxon signed-rank test, n = 10 videos; error bars are s.d.). The gray dots represent the scores on the test data for each round of cross-validation.
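
The grouping in (e) only requires pairwise footprint overlaps. A minimal sketch, assuming 'overlap' means any shared pixel (the paper's exact criterion may differ):

```python
import numpy as np

def split_by_overlap(gt_masks):
    """Sketch of the grouping in Extended Data Fig. 5e: divide
    ground-truth neurons into those whose spatial footprints share
    pixels with another neuron and those that do not. `gt_masks` is
    an (N, H, W) boolean array.
    """
    n = gt_masks.shape[0]
    flat = gt_masks.reshape(n, -1).astype(np.int32)
    shared = flat @ flat.T        # pairwise shared-pixel counts
    np.fill_diagonal(shared, 0)   # ignore self-overlap
    overlapping = shared.sum(axis=1) > 0
    return gt_masks[overlapping], gt_masks[~overlapping]
```

The recall of each group is then the fraction of its ground-truth neurons matched by a segmented mask, with the matching criterion taken from the Methods.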

Extended Data Fig. 6 Each pre-processing step and the CNN contributed to the accuracy of SUNS at the cost of lower speed.

We evaluated the contribution of each pre-processing option (spatial filtering, temporal filtering, and SNR normalization) and of the CNN to SUNS. The reference algorithm (SUNS) used all options except spatial filtering. We compared the performance of this reference algorithm to the performance with additional spatial filtering (optional SF), without temporal filtering (no TF), without SNR normalization (no SNR), and without the CNN (no CNN) when analyzing the ABO 275 μm dataset through ten-fold leave-one-out cross-validation. a, The recall, precision, and F1 score of these variants. The temporal filtering, SNR normalization, and CNN each contributed significantly to the overall accuracy, but the impact of spatial filtering was not significant (*P < 0.05, **P < 0.005, n.s. - not significant; two-sided Wilcoxon signed-rank test, n = 10 videos; error bars are s.d.). The gray dots represent the scores on the test data for each round of cross-validation. b, The speed and F1 score of these variants. Eliminating temporal filtering or the CNN significantly increased the speed, while adding spatial filtering or eliminating SNR normalization significantly lowered the speed (**P < 0.005; two-sided Wilcoxon signed-rank test, n = 10 videos; error bars are s.d.). The light-colored dots represent F1 scores and speeds for the test data for each round of cross-validation. The execution of SNR normalization was fast (~0.07 ms per frame). However, eliminating SNR normalization led to a much lower optimal th_prob, which increased the number of active pixels and decreased precision. In addition, 'no SNR' was slower than the complete SUNS algorithm because of the increased post-processing workload of managing the additional active pixels and regions.
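
For concreteness, the sketch below shows one common way to implement the SNR normalization step on a temporally filtered movie, using a per-pixel median baseline and a median-absolute-deviation noise estimate; these robust estimators are assumptions, not necessarily the paper's exact formulas.

```python
import numpy as np

def snr_normalize(filtered_video):
    """Sketch of the SNR normalization evaluated in Extended Data
    Fig. 6: convert a temporally filtered (T, H, W) movie to SNR units
    by subtracting a per-pixel baseline and dividing by a per-pixel
    noise estimate.
    """
    baseline = np.median(filtered_video, axis=0)
    # Robust noise estimate from the median absolute deviation (MAD);
    # 0.6745 scales MAD to the s.d. of Gaussian noise.
    mad = np.median(np.abs(filtered_video - baseline), axis=0)
    noise = mad / 0.6745
    return (filtered_video - baseline) / (noise + 1e-9)
```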

Extended Data Fig. 7 The recall, precision, and F1 score of SUNS were superior to those of other methods on a variety of datasets.

a, Training on one ABO 275 μm video and testing on nine ABO 275 μm videos (each data point is the average over a set of nine test videos, n = 10); b, Training on ten ABO 275 μm videos and testing on ten ABO 175 μm videos (n = 10); c, Training on one Neurofinder video and testing on its paired Neurofinder video (n = 12); d, Training on three-quarters of one CaImAn video and testing on the remaining quarter of the same video (n = 16). The F1 scores of SUNS were significantly higher than those of other methods in most comparisons (*P < 0.05, **P < 0.005, ***P < 0.001, n.s. - not significant; two-sided Wilcoxon signed-rank test; error bars are s.d.). The gray dots represent the individual scores for each round of cross-validation.
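
Throughout these comparisons, recall, precision, and F1 follow their standard definitions once segmented and ground-truth neurons have been matched; the sketch below records them (the mask-matching criterion itself is defined in the Methods and omitted here).

```python
def precision_recall_f1(n_matched, n_gt, n_segmented):
    """Scores used throughout these figures. `n_matched` is the number
    of matched (segmented, ground-truth) neuron pairs, `n_gt` the number
    of ground-truth neurons, and `n_segmented` the number of segmented
    neurons.
    """
    recall = n_matched / n_gt            # fraction of GT neurons found
    precision = n_matched / n_segmented  # fraction of detections correct
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```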

Extended Data Fig. 8 SUNS online outperformed CaImAn Online in accuracy and speed when processing a variety of datasets.

a,e, Training on one ABO 275 μm video and testing on nine ABO 275 μm videos (each data point is the average over a set of nine test videos, n = 10); b,f, Training on ten ABO 275 μm videos and testing on ten ABO 175 μm videos (n = 10); c,g, Training on one Neurofinder video and testing on its paired Neurofinder video (n = 12); d,h, Training on three-quarters of one CaImAn video and testing on the remaining quarter of the same video (n = 16). The F1 score and processing speed of SUNS online were significantly higher than those of CaImAn Online (**P < 0.005, ***P < 0.001; two-sided Wilcoxon signed-rank test; error bars are s.d.). The gray dots in (a-d) represent individual scores for each round of cross-validation. The light-colored dots in (e-g) represent F1 scores and speeds for the test data for each round of cross-validation. The light-colored markers in (h) represent F1 scores and speeds for the test data for each round of cross-validation performed on different CaImAn videos. We updated the baseline and noise regularly after initialization for the Neurofinder dataset, but did not do so for the other datasets.

Extended Data Fig. 9 Changing the frequency of neuron-mask updates traded off SUNS online's responsiveness to new neurons against its accuracy and speed.

The (a-c) F1 score and (d-f) speed of SUNS online increased as the number of frames per update (n_merge) increased for the (a, d) ABO 275 μm, (b, e) Neurofinder, and (c, f) CaImAn datasets. The solid line is the average, and the shading is one s.d. from the average (n = 10, 12, and 16 cross-validation rounds for the three datasets). In (a-c), the green lines show the F1 score (solid) ± one s.d. (dashed) of SUNS batch. The F1 score and speed generally increased as n_merge increased; for example, the F1 score and speed when using n_merge = 500 were respectively higher than when using n_merge = 20, and some of the differences were significant (*P < 0.05, **P < 0.005, ***P < 0.001, n.s. - not significant; two-sided Wilcoxon signed-rank test; n = 10, 12, and 16, respectively). We updated the baseline and noise regularly after initialization for the Neurofinder dataset, but did not do so for the other datasets. The update frequency, and hence the responsiveness of SUNS online to the appearance of new neurons, is inversely proportional to n_merge, so a trade-off exists between this responsiveness and the accuracy and speed of SUNS online. At the cost of lower responsiveness, a higher n_merge allowed the accumulation of temporal information and improved the accuracy of neuron segmentation. Likewise, a higher n_merge improved the speed because it reduced the frequency of the computations that aggregate neurons.
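
The role of n_merge is easiest to see in a stylized processing loop: per-frame CNN inference runs on every frame, while the more expensive aggregation of detections into neuron masks runs once per n_merge frames. In this sketch, `frame_prob` is a hypothetical callback returning the CNN probability map for one pre-processed frame, and a running pixel count stands in for the paper's mask-merging step.

```python
import numpy as np

def online_segmentation(frames, frame_prob, th_prob, n_merge=500):
    """Sketch of the update schedule studied in Extended Data Fig. 9.

    `frames` yields pre-processed SNR frames; `frame_prob(frame)` is a
    placeholder for CNN inference on one frame; `th_prob` thresholds
    the probability map. Leftover frames after the last update are
    ignored for brevity.
    """
    pending = []        # per-frame binary maps awaiting aggregation
    neuron_map = None   # accumulated evidence for neuron locations
    for i, frame in enumerate(frames, start=1):
        pending.append(frame_prob(frame) > th_prob)
        if i % n_merge == 0:   # update masks once per n_merge frames
            batch = np.sum(pending, axis=0)
            neuron_map = batch if neuron_map is None else neuron_map + batch
            pending.clear()
    return neuron_map
```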

Extended Data Fig. 10 Updating the baseline and noise after initialization increased the accuracy of SUNS online at the cost of lower speed.

We compared the F1 score and speed of SUNS online with and without the baseline and noise update for the (a) ABO 275 μm, (b) Neurofinder, and (c) CaImAn datasets. The F1 scores with the baseline and noise update were generally higher, but the speeds were lower (*P < 0.05, **P < 0.005, ***P < 0.001, n.s. - not significant; two-sided Wilcoxon signed-rank test; error bars are s.d.). The light-colored dots represent F1 scores and speeds for the test data for each round of cross-validation. The improvement in the F1 score was larger when the baseline fluctuations were more significant. d, Example processing time per frame of SUNS online with the baseline and noise update on Neurofinder video 02.00. The lower inset zooms in on the data in the red box. The upper inset shows the distribution of processing times per frame. The processing time per frame was consistently shorter than the microscope frame interval (125 ms per frame). The first few frames after initialization were processed faster than the following frames because the baseline and noise update was not yet performed on those frames.
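
One plausible form of such an update recomputes robust per-pixel statistics over a recent window of frames and blends them with the previous estimates. The sketch below is an illustration under stated assumptions (the exponential blending factor `alpha` in particular is ours), not the paper's exact update rule.

```python
import numpy as np

def update_baseline_noise(buffer, old_baseline, old_noise, alpha=0.05):
    """Sketch of the periodic baseline/noise refresh examined in
    Extended Data Fig. 10. `buffer` is a (W, H, W_px) stack of recent
    filtered frames; old estimates are per-pixel arrays.
    """
    window_baseline = np.median(buffer, axis=0)
    # Robust noise estimate: MAD scaled to Gaussian s.d.
    window_mad = np.median(np.abs(buffer - window_baseline), axis=0)
    window_noise = window_mad / 0.6745
    # Blend new window statistics with the previous estimates.
    baseline = (1 - alpha) * old_baseline + alpha * window_baseline
    noise = (1 - alpha) * old_noise + alpha * window_noise
    return baseline, noise
```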

Supplementary information

Supplementary Information

Supplementary Figs. 1–14, Tables 1–10 and Methods.

Reporting Summary

Supplementary Video 1

Demonstration of how SUNS online gradually found new neurons in an example raw video. The example video comprises selected frames from the fourth quadrant of the video YST in the CaImAn dataset. We show the results of SUNS online with the 'tracking' option disabled (left) and enabled (right). The red contours are the neurons segmented from all frames before the current frame. We updated the identified neuron contours every second (ten frames), so the red contours appeared with some delay after a neuron's initial activation. The green contours in the right panel are neurons that were found in previous frames and appeared active in the current frame. Such tracked activity enables follow-up analysis of animal behaviours or brain network structures in real-time feedback neuroscience experiments.

Supplementary Video 2

Demonstration of how SUNS online gradually found new neurons in an example SNR video. The example video comprises selected frames from the fourth quadrant of the video YST in the CaImAn dataset after pre-processing and conversion to an SNR video. We show the results of SUNS online with the 'tracking' option disabled (left) and enabled (right). The red contours are the neurons segmented from all frames before the current frame. We updated the identified neuron contours every second (ten frames), so the red contours appeared with some delay after a neuron's initial activation. The green contours in the right panel are neurons that were found in previous frames and appeared active in the current frame. Such tracked activity enables follow-up analysis of animal behaviours or brain network structures in real-time feedback neuroscience experiments.

About this article

Cite this article

Bao, Y., Soltanian-Zadeh, S., Farsiu, S. et al. Segmentation of neurons from fluorescence calcium recordings beyond real time. Nat Mach Intell 3, 590–600 (2021). https://doi.org/10.1038/s42256-021-00342-x
