Data availability
Two datasets that used chunkflow in the image processing pipeline are publicly available at https://microns-explorer.org/ and https://flywire.ai/.
Code availability
Source code and documentation are available as Supplementary Software and online at https://github.com/seung-lab/chunkflow.
Acknowledgements
We would like to thank T. Macrina for realigning the somatosensory cortex dataset. We would also like to thank W. Wong for discussions and N. Kemnitz for cloud deployment help. We are grateful to Google for providing the technical support and computational resources, including early access to NVIDIA T4 GPUs on the Google Cloud Platform. We are grateful for technical assistance from Google, Amazon and Intel. These companies were not involved in the design of this study. This research was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC0005, NIH/NIMH (U01MH114824, U01MH117072, RF1MH117815), NIH/NINDS (U19NS104648, R01NS104926), NIH/NEI (R01EY027036), ARO (W911NF-12-1-0594) and the Mathers Foundation. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC or the US Government.
Author information
Contributions
J.W. developed the chunkflow framework and performed experiments. W.M.S. developed CloudVolume and some other dependent packages. K.L. trained the convolutional net for boundary detection. J.W. and H.S.S. wrote the report with help from W.M.S. and K.L.
Ethics declarations
Competing interests
H.S.S. declares financial interests in Certerra and Zetta AI.
Additional information
Peer review information Nature Methods thanks Pavel Tomancak and the other, anonymous reviewer(s) for their contribution to the peer review of this work.
Supplementary information
Supplementary Information
Supplementary Note, Supplementary Figs. 1–7 and Supplementary Table 1
Supplementary Software
Code and documentation for chunkflow
About this article
Cite this article
Wu, J., Silversmith, W.M., Lee, K. et al. Chunkflow: hybrid cloud processing of large 3D images by convolutional nets. Nat. Methods 18, 328–330 (2021). https://doi.org/10.1038/s41592-021-01088-5