
A deep neural network for real-time optoacoustic image reconstruction with adjustable speed of sound

A preprint version of the article is available at arXiv.

Abstract

Multispectral optoacoustic tomography is a high-resolution functional imaging modality that can non-invasively access a broad range of pathophysiological phenomena. Real-time imaging would facilitate the translation of multispectral optoacoustic tomography into clinical imaging, enable the visualization of dynamic pathophysiological changes associated with disease progression, and support in situ diagnoses. Model-based reconstruction affords state-of-the-art optoacoustic images but is too slow for real-time imaging. Deep learning, on the other hand, enables fast reconstruction of optoacoustic images, but the lack of experimental ground-truth training data leads to reduced image quality for in vivo scans. In this work we achieve accurate optoacoustic image reconstruction in 31 ms per image for arbitrary (experimental) input data by expressing model-based reconstruction with a deep neural network. The proposed deep learning framework, DeepMB, generalizes to experimental test data by training on optoacoustic signals synthesized from real-world images, with ground-truth optoacoustic images generated by model-based reconstruction. Based on qualitative and quantitative evaluation on a diverse dataset of in vivo images, we show that DeepMB reconstructs images approximately 1,000 times faster than the iterative model-based reference method while affording near-identical image quality. Accurate and real-time image reconstruction with DeepMB can give full access to the high-resolution and multispectral contrast of handheld optoacoustic tomography, and thus facilitate its adoption into clinical routine.
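
To make the training scheme described above concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation (the released code is at https://github.com/juestellab/deepmb). The network, the random-matrix forward model, the speed-of-sound (SoS) conditioning, and all sizes are illustrative placeholders; in the real pipeline the network is a U-Net, the forward model is an accurate acoustic model, the source images come from Pascal VOC 2012, and the ground-truth targets are iterative model-based reconstructions.

```python
# Minimal, self-contained sketch of the DeepMB training scheme described in
# the abstract. NOT the authors' released implementation: the network, the
# random-matrix forward model, the SoS conditioning and all sizes below are
# simplified placeholders chosen only to make the data flow concrete.
import torch
import torch.nn as nn

IMG, DET, T = 64, 32, 128  # image size, detector count, time samples

# Placeholder linear acoustic forward model A: image -> sinogram.
# The real pipeline uses an accurate, SoS-dependent transducer model.
A = torch.randn(DET * T, IMG * IMG) / (IMG * IMG) ** 0.5

def forward_model(img):            # (B,1,IMG,IMG) -> (B,1,DET,T)
    return (img.flatten(1) @ A.T).view(-1, 1, DET, T)

def crude_backproject(sino):       # adjoint as a stand-in domain transform
    return (sino.flatten(1) @ A).view(-1, 1, IMG, IMG)

class ReconNet(nn.Module):
    """Tiny convolutional stand-in for the network that refines the
    domain-transformed signals into an image (a U-Net in the paper)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x, sos):
        # Hypothetical SoS conditioning: append the normalized SoS as a
        # constant extra channel so one model serves a range of SoS values.
        sos_map = sos.view(-1, 1, 1, 1).expand_as(x)
        return self.net(torch.cat([x, sos_map], dim=1))

net = ReconNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):
    # 1) Source images: the paper uses Pascal VOC 2012; random tensors here.
    x_true = torch.rand(8, 1, IMG, IMG)
    sos = torch.empty(8).uniform_(0.9, 1.1)  # normalized SoS draws
    # 2) Synthesize optoacoustic signals with the forward model.
    sino = forward_model(x_true)
    # 3) Ground truth: in the real pipeline, an iterative model-based (MB)
    #    reconstruction of `sino`; the source image stands in here.
    target = x_true
    # 4) Supervised step on the image-domain loss.
    loss = nn.functional.mse_loss(net(crude_backproject(sino), sos), target)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the targets are model-based reconstructions precomputed per signal/SoS pair, a network trained this way learns to reproduce model-based image quality at arbitrary SoS values in a single forward pass.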


Fig. 1: DeepMB pipeline.
Fig. 2: Examples from the in vivo test dataset for different anatomical locations.
Fig. 3: Data residual norms of optoacoustic images from DeepMB, MB and BP reconstructions.
Fig. 4: Unmixing of a representative multispectral breast scan for DeepMB, MB and BP.

Data availability

In vivo data from two of the six scanned volunteers, the trained DeepMB model used in this work, and a download link for the Pascal VOC 2012 dataset (ref. 38) used to synthesize training data for DeepMB are provided along with the source code on GitHub (https://github.com/juestellab/deepmb; ref. 61). In vivo data from the other four scanned volunteers cannot be shared owing to privacy and consent restrictions.

Code availability

The source code for DeepMB is publicly available on GitHub (https://github.com/juestellab/deepmb; ref. 61).

References

  1. Ntziachristos, V. & Razansky, D. Molecular imaging by means of multispectral optoacoustic tomography (MSOT). Chem. Rev. 110, 2783–2794 (2010).

  2. Diot, G. et al. Multispectral optoacoustic tomography (MSOT) of human breast cancer. Clin. Cancer Res. 23, 6912–6922 (2017).

  3. Knieling, F. et al. Multispectral optoacoustic tomography for assessment of Crohn's disease activity. N. Engl. J. Med. 376, 1292–1294 (2017).

  4. Karlas, A. et al. Multispectral optoacoustic tomography of muscle perfusion and oxygenation under arterial and venous occlusion: a human pilot study. J. Biophoton. 13, e201960169 (2020).

  5. Dehner, C., Olefir, I., Chowdhury, K. B., Jüstel, D. & Ntziachristos, V. Deep-learning-based electrical noise removal enables high spectral optoacoustic contrast in deep tissue. IEEE Trans. Med. Imaging 41, 3182–3193 (2022).

  6. Kukačka, J. et al. Image processing improvements afford second-generation handheld optoacoustic imaging of breast cancer patients. Photoacoustics 26, 100343 (2022).

  7. Jüstel, D. et al. Spotlight on nerves: portable multispectral optoacoustic imaging of peripheral nerve vascularization and morphology. Adv. Sci. 10, 2301322 (2023).

  8. Regensburger, A. P. et al. Detection of collagens by multispectral optoacoustic tomography as an imaging biomarker for Duchenne muscular dystrophy. Nat. Med. 25, 1905–1915 (2019).

  9. Dima, A. & Ntziachristos, V. Non-invasive carotid imaging using optoacoustic tomography. Opt. Express 20, 25044–25057 (2012).

  10. Taruttis, A. & Ntziachristos, V. Advances in real-time multispectral optoacoustic imaging and its applications. Nat. Photon. 9, 219–227 (2015).

  11. Ivankovic, I., Merčep, E., Schmedt, C. G., Deán-Ben, X. L. & Razansky, D. Real-time volumetric assessment of the human carotid artery: handheld multispectral optoacoustic tomography. Radiology 291, 45–50 (2019).

  12. Sethuraman, S., Aglyamov, S. R., Amirian, J. H., Smalling, R. W. & Emelianov, S. Y. Intravascular photoacoustic imaging using an IVUS imaging catheter. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 54, 978–986 (2007).

  13. Yang, J. M. et al. Photoacoustic endoscopy. Opt. Lett. 34, 1591–1593 (2009).

  14. Xu, M. & Wang, L. V. Universal back-projection algorithm for photoacoustic computed tomography. Phys. Rev. E 71, 016706 (2005).

  15. Chowdhury, K. B., Prakash, J., Karlas, A., Jüstel, D. & Ntziachristos, V. A synthetic total impulse response characterization method for correction of hand-held optoacoustic images. IEEE Trans. Med. Imaging 39, 3218–3230 (2020).

  16. Chowdhury, K. B., Bader, M., Dehner, C., Jüstel, D. & Ntziachristos, V. Individual transducer impulse response characterization method to improve image quality of array-based handheld optoacoustic tomography. Opt. Lett. 46, 1–4 (2021).

  17. Ding, L., Deán-Ben, X. L. & Razansky, D. Real-time model-based inversion in cross-sectional optoacoustic tomography. IEEE Trans. Med. Imaging 35, 1883–1891 (2016).

  18. Jin, K. H., McCann, M. T., Froustey, E. & Unser, M. Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Process. 26, 4509–4522 (2017).

  19. Zhu, B., Liu, J. Z., Cauley, S. F., Rosen, B. R. & Rosen, M. S. Image reconstruction by domain-transform manifold learning. Nature 555, 487–492 (2018).

  20. Ongie, G. et al. Deep learning techniques for inverse problems in imaging. IEEE J. Sel. Areas Inf. Theory 1, 39–56 (2020).

  21. Lucas, A., Iliadis, M., Molina, R. & Katsaggelos, A. K. Using deep neural networks for inverse problems in imaging: beyond analytical methods. IEEE Signal Process. Mag. 35, 20–36 (2018).

  22. Gröhl, J., Schellenberg, M., Dreher, K. & Maier-Hein, L. Deep learning for biomedical photoacoustic imaging: a review. Photoacoustics 22, 100241 (2021).

  23. Hauptmann, A. & Cox, B. Deep learning in photoacoustic tomography: current approaches and future directions. J. Biomed. Opt. 25, 112903 (2020).

  24. Reiter, A. & Bell, M. A. L. A machine learning approach to identifying point source locations in photoacoustic data. In Photons Plus Ultrasound: Imaging and Sensing 100643J (SPIE, 2017).

  25. Aggarwal, H. K., Mani, M. P. & Jacob, M. MoDL: model-based deep learning architecture for inverse problems. IEEE Trans. Med. Imaging 38, 394–405 (2019).

  26. Liu, J. et al. SGD-Net: efficient model-based deep learning with theoretical guarantees. IEEE Trans. Comput. Imaging 7, 598–610 (2021).

  27. Genzel, M., Macdonald, J. & März, M. Solving inverse problems with deep neural networks—robustness included. IEEE Trans. Pattern Anal. Mach. Intell. 45, 1119–1134 (2022).

  28. Schlemper, J., Caballero, J., Hajnal, J. V., Price, A. N. & Rueckert, D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging 37, 491–503 (2018).

  29. Hammernik, K. et al. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med. 79, 3055–3071 (2018).

  30. Kim, M., Jeng, G. S., Pelivanov, I. & O'Donnell, M. Deep-learning image reconstruction for real-time photoacoustic system. IEEE Trans. Med. Imaging 39, 3379–3390 (2020).

  31. Lan, H., Jiang, D., Yang, C., Gao, F. & Gao, F. Y-Net: hybrid deep learning image reconstruction for photoacoustic tomography in vivo. Photoacoustics 20, 100197 (2020).

  32. Waibel, D. et al. Reconstruction of initial pressure from limited view photoacoustic images using deep learning. In Photons Plus Ultrasound: Imaging and Sensing 104942S (SPIE, 2018).

  33. Feng, J. et al. End-to-end Res-Unet based reconstruction algorithm for photoacoustic imaging. Biomed. Opt. Express 11, 5321–5340 (2020).

  34. Tong, T. et al. Domain transform network for photoacoustic tomography from limited-view and sparsely sampled data. Photoacoustics 19, 100190 (2020).

  35. Guan, S., Khan, A. A., Sikdar, S. & Chitnis, P. V. Limited-view and sparse photoacoustic tomography for neuroimaging with deep learning. Sci. Rep. 10, 8510 (2020).

  36. Guo, M., Lan, H., Yang, C., Liu, J. & Gao, F. AS-Net: fast photoacoustic reconstruction with multi-feature fusion from sparse data. IEEE Trans. Comput. Imaging 8, 215–223 (2022).

  37. Hauptmann, A. et al. Model-based learning for accelerated, limited-view 3-D photoacoustic tomography. IEEE Trans. Med. Imaging 37, 1382–1393 (2018).

  38. Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J. & Zisserman, A. The Pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 88, 303–338 (2010).

  39. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention 234–241 (Springer, 2015).

  40. Jeon, S. et al. Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans. Photoacoustics 15, 100136 (2019).

  41. Matrone, G., Savoia, A. S., Caliano, G. & Magenes, G. The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging. IEEE Trans. Med. Imaging 34, 940–949 (2015).

  42. Rosenthal, A., Ntziachristos, V. & Razansky, D. Acoustic inversion in optoacoustic tomography: a review. Curr. Med. Imaging Rev. 9, 318–336 (2013).

  43. Prahl, S. Assorted Spectra (accessed 19 January 2023); https://omlc.org/spectra/

  44. Tobin, J. et al. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems 23–30 (IEEE, 2017).

  45. Mårtensson, G. et al. The reliability of a deep learning model in clinical out-of-distribution MRI data: a multicohort study. Med. Image Anal. 66, 101714 (2020).

  46. Susmelj, A. K. et al. Signal domain learning approach for optoacoustic image reconstruction from limited view data. In Proc. 5th International Conference on Medical Imaging with Deep Learning 1173–1191 (PMLR, 2022).

  47. Schellenberg, M. et al. Photoacoustic image synthesis with generative adversarial networks. Photoacoustics 28, 100402 (2022).

  48. Jeon, S., Choi, W., Park, B. & Kim, C. A deep learning-based model that reduces speed of sound aberrations for improved in vivo photoacoustic imaging. IEEE Trans. Image Process. 30, 8773–8784 (2021).

  49. Longo, A., Jüstel, D. & Ntziachristos, V. Disentangling the frequency content in optoacoustics. IEEE Trans. Med. Imaging 41, 3373–3384 (2022).

  50. Tick, J., Pulkkinen, A. & Tarvainen, T. Image reconstruction with uncertainty quantification in photoacoustic tomography. J. Acoust. Soc. Am. 139, 1951 (2016).

  51. Tick, J. et al. Three dimensional photoacoustic tomography in Bayesian framework. J. Acoust. Soc. Am. 144, 2061 (2018).

  52. Hyun, D., Brickson, L. L., Looby, K. T. & Dahl, J. J. Beamforming and speckle reduction using neural networks. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 66, 898–910 (2019).

  53. Kang, E., Min, J. & Ye, J. C. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med. Phys. 44, e360–e375 (2017).

  54. Moya-Sáez, E., Peña-Nogales, Ó., Luis-García, R. D. & Alberola-López, C. A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data. Comput. Methods Programs Biomed. 210, 106371 (2021).

  55. Kutyniok, G. & Lim, W.-Q. Compactly supported shearlets are optimally sparse. J. Approx. Theory 163, 1564–1589 (2011).

  56. Wright, S. J., Nowak, R. D. & Figueiredo, M. A. T. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57, 2479–2493 (2009).

  57. Chartrand, R. & Wohlberg, B. Total-variation regularization with bound constraints. In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing 766–769 (IEEE, 2010).

  58. Kutyniok, G., Lim, W.-Q. & Reisenhofer, R. ShearLab 3D: faithful digital shearlet transforms based on compactly supported shearlets. ACM Trans. Math. Softw. 1–42 (ACM, 2016).

  59. Kunyansky, L. A. Explicit inversion formulae for the spherical mean Radon transform. Inverse Probl. 23, 373–383 (2007).

  60. Kuchment, P. & Kunyansky, L. in Handbook of Mathematical Methods in Imaging (ed. Scherzer, O.) 817–865 (Springer, 2011).

  61. Dehner, C. & Zahnd, G. DeepMB v1.0.0 (Zenodo, 2023); https://doi.org/10.5281/zenodo.8169175

Acknowledgements

We would like to thank A. Longo for her valuable contribution during in vivo image acquisition and the conception of Fig. 2, and R. Wilson for his attentive reading and improvements to the manuscript. This project received funding from the Bavarian Ministry of Economic Affairs, Energy and Technology (StMWi) (DIE-2106-0005//DIE0161/02, DeepOPUS, granted to D.J.) and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement no. 694968 (PREMSOT, granted to V.N.).

Author information

Contributions

C.D. and G.Z. contributed equally to this work. C.D., G.Z., and D.J. conceptualized the initial idea. C.D. and G.Z. implemented the algorithm, conducted the experiments, analysed the results, and wrote the manuscript. D.J. and V.N. supervised the work. All authors provided feedback and approved the manuscript.

Corresponding authors

Correspondence to Vasilis Ntziachristos or Dominik Jüstel.

Ethics declarations

Competing interests

V.N. is an equity owner in and consultant for iThera Medical GmbH. G.Z. and C.D. are employees of iThera Medical GmbH. D.J., C.D. and G.Z. are inventors in patent applications related to DeepMB (patent nos. EP22177153.8 and PCT/EP2023/064714).

Peer review

Peer review information

Nature Machine Intelligence thanks Andreas Hauptmann, and the other, anonymous, reviewers for their contribution to the peer review of this work. Primary Handling Editor: Jacob Huth, in collaboration with the Nature Machine Intelligence editorial team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Visual comparison of backprojection images with negative pixel values set to zero after the reconstruction (BP, i–l) and delay-multiply-and-sum with coherence factor (DMAS-CF, m–p) images, against the corresponding deep model-based (DeepMB, a–d) and model-based (MB, e–h) images.

Visual comparison of backprojection (BP) images with negative pixel values set to zero after the reconstruction (third row) and delay-multiply-and-sum with coherence factor (DMAS-CF, fourth row) images, against the corresponding deep model-based (DeepMB) and model-based (MB) images (first two rows). The presented samples are the same as those depicted in Fig. 2. DeepMB and MB images are nearly identical; BP images notably differ from the reference model-based reconstructions, suffering from lower resolution (see, for example, the structures shown in zoom A of tile i and zoom D of tile j), missing structures in image regions that contained negative pixel values (see, for example, zoom F of tile j, or the entire region below the skin line (Sk) in tiles k and l), and reduced contrast (see, for example, the structures shown in zoom I of tile k and zoom J of tile l). All images show the reconstructed initial pressure in arbitrary units and were slightly cropped to a field of view of 4.16 × 2.80 cm² to disregard the area occupied by the probe couplant above the skin line.

Extended Data Fig. 2 Examples of deep model-based and model-based images with low and high data residual norms.

Examples from the in vivo test dataset with low and high data residual norms (namely, below the 5th percentile (a–h) and above the 95th percentile (i–p) of all 4,814 test samples, respectively), for deep model-based (DeepMB) and model-based (MB) reconstructions. The data residual norm (R) is indicated in round brackets above each image. Panels (a, e) and (l, p) correspond to the samples for which DeepMB afforded the overall lowest and highest data residual norms, respectively. All images show the reconstructed initial pressure in arbitrary units and were slightly cropped to a field of view of 4.16 × 2.80 cm² to disregard the area occupied by the probe couplant above the skin line (Sk).
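
For orientation, the data residual norm R measures how well a reconstructed image explains the recorded signals under the acoustic forward model. A plausible definition, with forward operator M, reconstructed image x and recorded signals s (the exact norm and normalization used in the paper may differ), is:

$$ R(x) = \lVert M x - s \rVert_2 $$

Lower values indicate that a reconstruction is more consistent with the measured data, which is why near-identical residual norms for DeepMB and MB indicate near-identical reconstruction fidelity.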

Extended Data Fig. 3 Unmixing of a multispectral biceps scan for deep model-based, model-based, backprojection, and delay-multiply-and-sum with coherence factor reconstructions.

Unmixing of a representative multispectral biceps scan for deep model-based (DeepMB; a, e), model-based (MB; b, f), backprojection (BP; c, g), and delay-multiply-and-sum with coherence factor (DMAS-CF; d, h). The unmixed components for fat and water and for oxyhaemoglobin and deoxyhaemoglobin are shown in the first two rows, respectively. The third row depicts the reference absorption spectra of the four chromophores used during unmixing (i) and a schematic sketch of the anatomical context for the depicted scan (j). All optoacoustic images show the unmixed components in arbitrary units and were slightly cropped to a field of view of 4.16 × 2.80 cm² to disregard the area occupied by the probe couplant above the skin line. Mb: probe membrane, Sk: skin, Fa: fascia, Mu: muscle, Ve: blood vessel, Ne: nerve.
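
As background for the unmixing panels, linear spectral unmixing decomposes the absorption measured at each pixel across wavelengths into contributions from reference chromophore spectra. The following least-squares sketch is a hypothetical illustration, assuming plain linear unmixing with illustrative names and sizes; the paper's exact unmixing procedure may differ.

```python
# Hypothetical linear spectral unmixing sketch: a stack of reconstructions at
# W wavelengths is decomposed per pixel into K chromophore maps (for example
# oxyhaemoglobin, deoxyhaemoglobin, fat and water). All sizes are illustrative.
import torch

W, K, H, Wd = 28, 4, 64, 64
stack = torch.rand(W, H, Wd)                # multispectral image stack
S = torch.rand(W, K)                        # reference absorption spectra
pixels = stack.reshape(W, -1)               # (W, H*Wd) per-pixel measurements
# Least-squares fit S @ C ~= pixels, one coefficient vector per pixel.
C = torch.linalg.lstsq(S, pixels).solution  # (K, H*Wd)
maps = C.reshape(K, H, Wd).clamp_min(0)     # crude non-negativity constraint
```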

Extended Data Fig. 4 Unmixing of a multispectral abdomen scan for deep model-based, model-based, backprojection, and delay-multiply-and-sum with coherence factor reconstructions.

Unmixing of a representative multispectral abdomen scan for deep model-based (DeepMB; a, e), model-based (MB; b, f), backprojection (BP; c, g) and delay-multiply-and-sum with coherence factor (DMAS-CF; d, h). The unmixed components for fat and water and for oxyhaemoglobin and deoxyhaemoglobin are shown in the first two rows, respectively. The third row depicts the reference absorption spectra of the four chromophores used during unmixing (i) and a schematic sketch of the anatomical context for the depicted scan (j). All optoacoustic images show the unmixed components in arbitrary units and were slightly cropped to a field of view of 4.16 × 2.80 cm² to disregard the area occupied by the probe couplant above the skin line. Mb: probe membrane, Sk: skin, Fa: fascia, Mu: muscle, Ft: fat, Co: colon.

Extended Data Fig. 5 Unmixing of a multispectral carotid scan for deep model-based, model-based, backprojection, and delay-multiply-and-sum with coherence factor reconstructions.

Unmixing of a representative multispectral carotid scan for deep model-based (DeepMB; a, e), model-based (MB; b, f), backprojection (BP; c, g) and delay-multiply-and-sum with coherence factor (DMAS-CF; d, h). The unmixed components for fat and water and for oxyhaemoglobin and deoxyhaemoglobin are shown in the first two rows, respectively. The third row depicts the reference absorption spectra of the four chromophores used during unmixing (i) and a schematic sketch of the anatomical context for the depicted scan (j). All optoacoustic images show the unmixed components in arbitrary units and were slightly cropped to a field of view of 4.16 × 2.80 cm² to disregard the area occupied by the probe couplant above the skin line. Mb: probe membrane, Sk: skin, Fa: fascia, Mu: muscle, Ca: common carotid artery, Ju: jugular vein, Th: thyroid, Tr: trachea.

Extended Data Fig. 6 Example images from the alternative model DeepMB_initial-images trained using true initial pressure reference images.

Representative examples showing the inability of the alternative model DeepMB_initial-images (that is, trained on true initial pressure images) to reconstruct in vivo images. The three rows depict different anatomies (elbow: a–e, abdomen: f–j, calf: k–o). The three leftmost columns correspond to images reconstructed via model-based (MB), the alternative DeepMB_initial-images, and standard DeepMB. The two rightmost columns show the absolute differences between the reference model-based image and the images inferred by DeepMB_initial-images and DeepMB, respectively. The field of view is 4.16 × 4.16 cm²; the enlarged region is 0.61 × 0.61 cm².

Extended Data Fig. 7 Example images from the alternative model DeepMB_in-vivo trained using in vivo data.

Representative examples of reconstruction artefacts (red arrows) from the alternative model DeepMB_in-vivo (that is, trained on in vivo data instead of synthesized data). The three rows depict different anatomies (biceps: a–e, breast: f–j, thyroid: k–o). The three leftmost columns correspond to images reconstructed via model-based (MB), the alternative DeepMB trained on in vivo data (DeepMB_in-vivo), and standard DeepMB. The two rightmost columns show the absolute differences between the reference model-based image and the images inferred by DeepMB_in-vivo and DeepMB, respectively. The field of view is 4.16 × 4.16 cm²; the enlarged region is 0.61 × 0.61 cm².

Extended Data Table 1 Quantitative evaluation of deep model-based and model-based reconstructions for different aggregations of the in vivo test dataset

Supplementary information

Reporting Summary

Supplementary Video 1

Carotid artery continuously imaged in the transversal view at 800 nm.

Supplementary Video 2

Biceps continuously imaged in the transversal view at 800 nm, while the speed of sound (SoS) is gradually adjusted via a series of DeepMB reconstructions.
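
The video illustrates the adjustable speed of sound: the same recorded signals are reconstructed repeatedly while only the SoS input changes. Below is a hypothetical sketch of such a sweep, reusing the placeholder definitions (ReconNet, forward_model, crude_backproject, IMG) from the training sketch after the Abstract; none of these names are the released DeepMB interface.

```python
# Hypothetical SoS sweep over one fixed scan, reusing the placeholder
# definitions from the training sketch; not the released DeepMB interface.
import torch

net = ReconNet().eval()
sino = forward_model(torch.rand(1, 1, IMG, IMG))    # one fixed scan
with torch.no_grad():
    for sos in torch.linspace(0.9, 1.1, steps=11):  # normalized SoS values
        frame = net(crude_backproject(sino), sos.view(1))
        # each `frame` is one reconstruction of the same data at a new SoS
```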

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Dehner, C., Zahnd, G., Ntziachristos, V. et al. A deep neural network for real-time optoacoustic image reconstruction with adjustable speed of sound. Nat. Mach. Intell. 5, 1130–1141 (2023). https://doi.org/10.1038/s42256-023-00724-3
