A deep-learning model for transforming the style of tissue images from cryosectioned to formalin-fixed and paraffin-embedded

Abstract

Histological artefacts in cryosectioned tissue can hinder rapid diagnostic assessments during surgery. Formalin-fixed and paraffin-embedded (FFPE) tissue provides higher quality slides, but the process for obtaining them is laborious (typically lasting 12–48 h) and hence unsuitable for intra-operative use. Here we report the development and performance of a deep-learning model that improves the quality of cryosectioned whole-slide images by transforming them into the style of whole-slide FFPE tissue within minutes. The model consists of a generative adversarial network incorporating an attention mechanism that rectifies cryosection artefacts and a self-regularization constraint between the cryosectioned and FFPE images for the preservation of clinically relevant features. Transformed FFPE-style images of gliomas and of non-small-cell lung cancers from a dataset independent from that used to train the model improved the rates of accurate tumour subtyping by pathologists.
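
The self-regularization constraint mentioned above penalizes divergence between the cryosectioned input and its FFPE-style translation so that clinically relevant features survive the style transfer. A minimal PyTorch sketch of such a constraint, assuming an illustrative frozen feature extractor; the names feature_extractor, generator and lambda_sr are assumptions for the example, not the authors' implementation:

```python
import torch.nn.functional as F

# Sketch of a self-regularization (SR) penalty: keep the FFPE-style
# output close to the cryosectioned input in a shared feature space,
# preserving diagnostically relevant structures during style transfer.
def self_regularization_loss(feature_extractor, x_frozen, x_ffpe_style):
    f_in = feature_extractor(x_frozen)       # features of the cryosection patch
    f_out = feature_extractor(x_ffpe_style)  # features of the translated patch
    return F.l1_loss(f_out, f_in)

# Inside a GAN generator update (sketch):
#   x_ffpe_style = generator(x_frozen)
#   loss_G = adversarial_loss + lambda_sr * self_regularization_loss(
#       feature_extractor, x_frozen, x_ffpe_style)
```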

Fig. 1: Cryosection to FFPE translation workflow.
Fig. 2: AI-FFPE method architecture.
Fig. 3: SAB and SR loss function effect on the model performance.
Fig. 4: Improvement of artefacts in the brain frozen sections.
Fig. 5: Improvement of artefacts in the lung frozen sections.
Fig. 6: AI-FFPE enhancement of tumour-specific diagnostic patterns.

Data availability

The TCGA diagnostic whole-slide data (GBM, LGG, LUAD and LUSC) and the corresponding labels are available from the NIH genomic data commons (https://portal.gdc.cancer.gov). Restrictions apply to the availability of in-house data, which were used with institutional permission for the purposes of this project. All requests for access to in-house data may be addressed to the corresponding authors and will be processed in accordance with institutional guidelines. Data can only be shared for academic research purposes and will require a material-transfer or data-transfer agreement with the receiving institution.

Code availability

All code was implemented in Python using PyTorch as the primary deep-learning library. The complete pipeline for processing whole-slide images and for training and evaluating the deep-learning models is available at the AI-FFPE repository at https://github.com/DeepMIALab and can be used to reproduce the experiments reported in this paper.
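
For illustration, the sketch below shows the kind of whole-slide tiling step such a pipeline typically performs, using the OpenSlide library; the slide path, patch size and magnification level are assumptions for the example, not the repository's actual settings:

```python
import openslide  # pip install openslide-python

# Tile a whole-slide image into non-overlapping RGB patches.
slide = openslide.OpenSlide("example_frozen_section.svs")  # hypothetical path
patch_size = 512
level = 0  # highest magnification
width, height = slide.level_dimensions[level]

patches = []
for y in range(0, height - patch_size + 1, patch_size):
    for x in range(0, width - patch_size + 1, patch_size):
        patch = slide.read_region((x, y), level, (patch_size, patch_size))
        patches.append(patch.convert("RGB"))  # in practice, filter background tiles here
```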

Acknowledgements

M.T., K.B.O. and G.I.G. are grateful to the Scientific and Technological Research Council of Turkey (TUBITAK) for a 2232 International Outstanding Researcher Fellowship and to TUBITAK Ulakbim for the Turkish National e-Science e-Infrastructure (TRUBA)-cluster and data-storage services. We also thank Ö. Asar and H. Okut for their guidance and assistance in the evaluation of the results.

Author information

Authors and Affiliations

Authors

Contributions

M.T., F.M. and K.B.O. conceived the study and designed the experiments. K.B.O. performed the experimental analysis. B.D., G.I.G., M.Y.L., E.K., D.D., K.B. and T.Y.C. curated the training and test datasets. K.B.O., S.C., K.B., D.D., G.S., M.T. and F.M. analysed the results. K.B.O., S.C., U.P.H., F.Y., D.F.K.W., M.T. and F.M. prepared the manuscript. M.T. and F.M. supervised the research.

Corresponding authors

Correspondence to Faisal Mahmood or Mehmet Turan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Biomedical Engineering thanks Geert Litjens and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Comparison of all benchmarked methods’ improvement of various artefacts in brain tissue sections.

AI-FFPE is compared with several unsupervised image-to-image translation methods: CycleGAN, FastCUT, AI-FFPE without the spatial attention block but with the SR loss (AI-FFPE w/o SAB), AI-FFPE without the SR loss but with the SAB (AI-FFPE w/o SR loss), and AI-FFPE without both the SAB and the SR loss (AI-FFPE w/o SAB and SR loss).

Extended Data Fig. 2 Comparison of all benchmarked methods’ improvement of various artefacts in lung tissue sections.

Under the constraint of the cycle-consistency loss, CycleGAN in most cases does not make any changes to the input FS images. FastCUT’s contrastive learning is useful for maximizing shared content-related features between the input and synthesized image patches; however, it lacks a quality essential for the improvement of lung WSIs: the ability to determine tissue edges, such as the boundaries of vessels or airways, beyond which the image must remain untouched.
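
For context, FastCUT’s contrastive learning pairs features from corresponding patch locations of the input and synthesized images, pulling matched pairs together and pushing mismatched locations apart. A simplified PatchNCE-style sketch (the feature shapes and temperature are illustrative assumptions, not FastCUT’s exact implementation):

```python
import torch
import torch.nn.functional as F

# Patch-wise InfoNCE loss: each synthesized-image feature should match
# the input-image feature extracted at the same spatial location.
def patch_nce_loss(feat_src, feat_tgt, tau=0.07):
    # feat_src, feat_tgt: (N, C) features from N matching patch locations
    feat_src = F.normalize(feat_src, dim=1)
    feat_tgt = F.normalize(feat_tgt, dim=1)
    logits = feat_tgt @ feat_src.t() / tau  # (N, N) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)  # diagonal entries are positives
```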

Extended Data Fig. 3 Cross-organ adaptability of AI-FFPE.

a When the brain patches were processed with the lung-trained AI-FFPE model, many of the improvements observed in the same images processed by the brain-trained model were replicated; however, the staining appeared more red/pink. b A similar but more severe colouring problem is present here: some cells and nuclei became pink and lost their cellular characteristics, becoming unrecognizable. The folding artefact was not rectified as efficiently as in the brain-model-processed version; however, slightly more of the empty space is filled with extracellular matrix (ECM) in the lung-model-processed version, which did not add any significant diagnostic information compared with the brain-model-processed version. c When the lung patch was processed with the brain-trained model, only some colouring differences and slight differences in the ECM patterns of the parenchyma were observed; both models preserved the empty space in the middle. d The staining falls on the redder side of the spectrum, and the empty areas are more intensively filled, probably owing to the denser nature of brain tissue, which does not harbour air-filled empty spaces.

Extended Data Fig. 4 Exemplars of failure cases.

a In rare cases of severe freezing artefacts, the model cannot reverse the dislocation of cells/nuclei caused by expanding ice crystals pushing the tissue away from its original location. The empty clefts where the ice crystals formed are filled with ECM, creating a mesh-like appearance. b The models sometimes show suboptimal performance in correcting severe chatter artefacts because the images in the FFPE target domain also exhibit a relatively high frequency of chatter artefacts. c Although, in the vast majority of cases, corrected images with folding artefacts show clear clues that the original patches contained folding artefacts, rarely it may take the examiner slightly longer to recognise that an increase in cell density is actually due to a corrected folding artefact. d The arrows show examples of unnecessary orange-pink colouring of some cells/areas; however, these colour aberrations are uncommon and do not seem to affect the diagnostically meaningful patterns present in the images.

Extended Data Fig. 5 Improvement of output image quality throughout the network training.

Network output images for the brain and lung tissue sections at different stages of the learning process, that is, after 10k, 50k, 100k, 200k, 400k and 600k iterations. In the brain sections, the visibility of astrocytic glial neoplastic and stromal cell nuclei, as well as of the fibrillar structures, improves through the iterations. Even though visual enhancement of the lung tissue sample is highly challenging owing to its alveolar architecture, significant restoration of the connective tissue is observed as training progresses. At the beginning of training, diagnostically misleading regions, such as the artificial appearance of bleeding and blurred nuclear boundaries, were frequently observed in AI-FFPE patches; these issues were resolved by the end of the fifth epoch.

Supplementary information

Supplementary Information

Supplementary figures and tables.

Reporting Summary

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Ozyoruk, K.B., Can, S., Darbaz, B. et al. A deep-learning model for transforming the style of tissue images from cryosectioned to formalin-fixed and paraffin-embedded. Nat. Biomed. Eng. 6, 1407–1419 (2022). https://doi.org/10.1038/s41551-022-00952-9
