Abstract
Diagnostic pathology, historically dependent on visual scrutiny by experts, is essential for disease detection. Advances in digital pathology and developments in computer vision technology have led to the application of artificial intelligence (AI) in this field. Despite these advancements, the variability in pathologists’ subjective interpretations of diagnostic criteria can lead to inconsistent outcomes. To meet the need for precision in cancer therapies, there is an increasing demand for accurate pathological diagnoses. Consequently, traditional diagnostic pathology is evolving towards “next-generation diagnostic pathology”, prioritizing on the development of a multi-dimensional, intelligent diagnostic approach. Using nonlinear optical effects arising from the interaction of light with biological tissues, multiphoton microscopy (MPM) enables high-resolution label-free imaging of multiple intrinsic components across various human pathological tissues. AI-empowered MPM further improves the accuracy and efficiency of diagnosis, holding promise for providing auxiliary pathology diagnostic methods based on multiphoton diagnostic criteria. In this review, we systematically outline the applications of MPM in pathological diagnosis across various human diseases, and summarize common multiphoton diagnostic features. Moreover, we examine the significant role of AI in enhancing multiphoton pathological diagnosis, including aspects such as image preprocessing, refined differential diagnosis, and the prognostication of outcomes. We also discuss the challenges and perspectives faced by the integration of MPM and AI, encompassing equipment, datasets, analytical models, and integration into the existing clinical pathways. Finally, the review explores the synergy between AI and label-free MPM to forge novel diagnostic frameworks, aiming to accelerate the adoption and implementation of intelligent multiphoton pathology systems in clinical settings.
Similar content being viewed by others
Introduction
Pathology often provides the “gold standard” for disease diagnosis1. Historically, this discipline has relied on the keen eyes of pathologists to make clinical judgments based on visual examinations of stained tissue sections under a microscope to classify diseases and determine their prognoses. The advent of whole slide imaging (WSI) scanners in the last decade has transformed how these images are collected and examined, ushering in a new era for pathology2. WSIs have become a cornerstone for remote pathology consultations, routinely facilitating diagnosis, research, and education in pathology, offering unprecedented convenience to practitioners. However, the reliance on the extensive expertise of experienced pathologists persists, whether reviewing slides under a microscope or analyzing WSIs. The training cycle for professional pathologists remains lengthy and demanding. This factor, coupled with the growing number of cases needing diagnosis annually, poses an escalating challenge for medical institutions striving to maintain high-quality diagnostic services.
Artificial intelligence (AI), predominantly driven by deep learning, has shown its superiority in various computer vision applications including image enhancement3,4,5, classification6, detection7, and segmentation by automatically recognizing and extracting complex features from images8. Simultaneously, the availability of large-scale WSI datasets rich in pixel-level detail has allowed the expansion of deep learning techniques, traditionally applied to natural images, to the realm of microscopic imagery. The digital transformation of clinical pathology has led to the automation of various aspects of the field, integrating supportive diagnostic techniques such as diagnosis9, biomarker identification10, and prediction11, which combinedly to be computational pathology.
The WSI-based intelligent pathology not only alleviates the burden on pathologists but also provides both patients and clinicians with more objective tools for diagnosis and prognosis. Nevertheless, diagnostic procedures within pathology still face a “gray zone”, where varying interpretations of diagnostic criteria among pathologists can lead to discrepancies in the diagnosis of certain conditions. The precision required for personalized cancer therapy further escalates the need for accurate tissue pathology marker diagnosis, whereas misdiagnoses can lead to misguided treatments and can also hinder the progress of drug development. Consequently, to develop a new generation of pathological diagnostic paradigms, it is essential not only to create an AI-assisted diagnostic framework but also to incorporate innovative multimodal imaging techniques that enhance conventional pathology.
The most commonly pathological staining method is hematoxylin and eosin (H&E) staining. Irrespective of intraoperative frozen sections or postoperative paraffin sections, the production of H&E slides involves intricate histological procedures such as biopsy, fixation, sectioning, and staining. With the advancement of label-free optical microscopy12,13, techniques such as quantitative phase imaging (QPI)14,15, photoacoustic microscopy (PAM), optical coherence tomography (OCT)16, and stimulated Raman scattering (SRS) microscopy have complemented traditional pathology. They offer unique insights into cellular physical parameters in vitro, functional imaging in vivo, and tissue molecular characteristics. Notably, multiphoton microscopy (MPM) enables simultaneous imaging of multiple intrinsic components within biological tissues. Moreover, it attains imaging contrast and resolution comparable to traditional histopathology, directly extracting qualitative microstructure and quantitative spectral features for pathological diagnosis17. Enabled by deep learning methodologies, OCT facilitated the automated detection of geographic atrophy in age-related macular degeneration18. QPI allowed for virtual quantitative fluorescent imaging of live organoids19, while PAM allowed intraoperative histology of bone tissue20. Additionally, SRS provided near real-time intraoperative diagnosis of brain tumors, creating a complementary diagnostic pathway independent of traditional pathology laboratories21. These technologies significantly enhance the accuracy and efficiency of diagnosis. For AI-empowered MPM, comprehensive exploration of multiphoton feature patterns, such as tumor infiltration patterns22,23,24,25 and vascular collagen deposition17,26,27, has been integrated to achieve distinctive auxiliary diagnosis28,29,30,31,32,33,34,35,36,37,38,39,40,41 and prognosis prediction42 capabilities, holding great promise for clinical translation.
In this review, we first provide a concise overview of multiphoton physics mechanisms and multiphoton microscopic instrument. Subsequently, we systematically summarize pathological applications of MPM in various human diseases. Drawing on multiphoton pathological imaging, we explore the positive impact of artificial intelligence — extending from machine learning to deep learning — in advancing diagnostics assisted by multiphoton pathology. Finally, considering the current status of multiphoton intelligent pathology and the requirements for precision diagnostics, we discuss the challenges and future perspectives associated with the integration of MPM and AI. We anticipate that this review will contribute to the clinical translation and intelligent applications of multiphoton microscopy, fostering progress in “next-generation diagnostic pathology”.
Label-free multiphoton microscopy
Label-free optical microscopy exploits the interaction of light with biological tissues, such as refractive index, molecular vibrations, scattering, or absorption, to achieve various imaging contrasts. Table 1 provides a comparative overview of the capabilities and applications of common label-free biomedical microscopy. The diverse imaging mechanisms of these techniques render them suitable for different clinical applications. QPI measures phase changes to obtain contour and morphology information of in vitro cell samples. This computational-based optical system is both simple and cost-effective. PAM and OCT achieve imaging depths at the millimeter scale. Although they sacrifice some spatial resolution, there in vivo vasculature and ophthalmology applications are also clinically recognized.
To obtain high-resolution, high-contrast images resembling those produced by traditional pathology, two nonlinear optical microscopies, MPM and SRS have been widely applied in label-free pathological diagnosis. SRS not only acquires pathological images but also enables selective Raman spectral analysis of components such as lipids and proteins. However, the complexity of the excitation light source module in current SRS systems has hindered its widespread commercial adoption, and further exploration is needed to fully establish its indications for pathological diagnosis. In contrast, commercial multiphoton microscopes, based on SHG and TPEF, have reached a high level of maturity and availability since their inception in 1997. MPM has been applied to examine tumor pathology in as many as 16 human organs, such as brain tumors17,43,44,45,46,47, breast cancer23,24,48,49,50,51,52,53, and colorectal cancer30,54,55,56,57. Consequently, MPM was highlighted as one of the significant advancements in label-free histopathology in the 2016 research highlights of Nature Methods58.
The principle of multiphoton microscopy
MPM requires high peak power from ultra-short pulse lasers. To capture a multiphoton image of a single field of view, the excitation light scans the specimen point-by-point and line-by-line via scanning system and objective. When multiple low-energy photons simultaneously reach the fluorophores or specific structures in specimen, they interact to produce multiphoton optical signals, including two-photon/three-photon excited fluorescence and second/third harmonic generation. These signals are typically collected in an epi-detection configuration by the objective and guided onto the photomultiplier tubes, which convert the optical information into electrical signals. By utilizing an XY translation stage to sequentially capture images from each position within the specimen, a large-scale stitched image can be constructed.
TPEF
TPEF is a third-order nonlinear absorption process. In this process, a fluorescent molecule or atom simultaneously absorbs two photons of the same frequency. During the absorption process, electrons in the ground state are first excited to an intermediate “virtual state” by one photon and then further excited to the final excited state by another photon. In other words, absorption of two photons of the same frequency excites electrons to a higher energy level. Following a certain relaxation time, electrons in the excited state spontaneously transition back to the ground state, emitting fluorescence with a frequency slightly lower than twice the incident light frequency.
SHG
SHG is a second-order nonlinear optical phenomenon, also known as “frequency doubling”. It refers to the output photons having twice the frequency of the incident photons when two photons of the same frequency interact with a nonlinear medium. The output second harmonic wave is termed the second harmonic. In the process of second harmonic generation, an electron in the ground state absorbs two photons of the same frequency, is excited to a virtual state, and then emits a second harmonic photon before returning to the ground state.
Endogenous signal sources
In biological tissues, numerous biomolecules exhibit TPEF and SHG signals. For instance, TPEF can image endogenous fluorophores such as nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD)59. SHG occurs in non-centrosymmetric molecular structures like collagen60, microtubules61, and myosin62. Thus, SHG and TPEF endogenous signals provides a comprehensive characterization of tissue structure and multi-parameter functional metabolism. This approach overcomes the influence of labeled biological processes or toxicity, offering a crucial tool for studying pathological samples. Taking the example of multiphoton imaging of cerebral vascular malformations, Fig. 1a illustrates images from the SHG and two TPEF detection channels, along with a schematic representation of endogenous signal sources17. Detailed endogenous signal sources have been summarized in previous references63,64.
Multiphoton microscopic instrument
Figure 1b illustrates the representative history of AI-empowered label-free MPM65,66,67,68,69,70,71,72,73. In 1931, Maria Goeppert-Mayer proposed the concept of TPEF74. Thirty years later, the invention of the laser facilitated the first experimental verification of TPEF75. In 1974, Robert Hellwarth introduced the SHG microscope, utilized for observing spatial structural changes in ZnSe crystals76. In 1990, the Webb group introduced the concept of two-photon excitation fluorescence microscopy, marking DNA in pig kidney cells and observing chromosome morphology in live cells65. In 1997, the Bio-Rad company produced the first commercial multiphoton laser scanning microscope. Currently, globally microscope companies are continually innovating desktop multiphoton laser scanning microscopy, greatly advancing the life sciences77,78.
The commercial research-grade multiphoton microscopes, designed to meet the needs of most researchers, typically can simultaneously image both labeled and unlabeled specimens. However, the large equipment footprint of such microscopes requires placement on laboratory optical platforms, and their high cost discourages some users. Therefore, researchers have been devoted to developing more portable and economical multiphoton microscopes. Although the miniaturized design and integration technology may sacrifice image resolution or field of view, this trade-off makes the instrument more portable, suitable for applications in on-site pathology diagnosis and other widespread diagnosis scenarios. In 2017, fast high-resolution miniature two-photon microscopy was successfully applied to brain imaging in freely behaving mice67. In 2018, the multimodal label-free nonlinear imaging system was implemented to intraoperatively characterize the tumor microenvironment52. Excitingly, in 2023, the space station-level two-photon microscope achieved the first three-dimensional images of astronauts’ skin. For future challenges in the development of multiphoton microscopic instrument from research-grade to pathological-grade, please refer to Section 6.3.
With the continuous development of multiphoton instruments, there has been a significant emergence of pathological applications in MPM. Meanwhile, the rise of artificial intelligence technology enables multiphoton intelligent pathology. Section 4 introduces the applications of multiphoton pathology, while Section 5 focuses on AI-empowered multiphoton pathology diagnosis.
Applications of multiphoton microscopy in pathological diagnosis
Label-free MPM, with its specific identification of cellular cytoplasm, extracellular matrix, and their interactions, has opened a novel perspective in pathological research. This section summarizes the applications of MPM in pathological diagnosis through the exploration of multiphoton diagnostic features.
Firstly, multiphoton imaging of the cytoplasm reveals rich cellular morphological information, such as cancer cells79, hyperplasia43,55, and necrosis44, which is crucial for determining the grading and prognosis of tumors. Additionally, through the analysis of specific features of cancer nests, different tumor growth patterns80 can be distinguished, providing a basis for the formulation of clinical treatment plans. It is noteworthy that MPM can also quantitatively reflect cellular metabolic activity by measuring the ratio of NADH to FAD in the cytoplasm81. In addition to cancer cell identification, MPM can differentiate other cell types, such as myoepithelial cells82, lymphocytes37, and glandular cells83, based on differences in cytoplasmic morphology and signal intensity. Taking myoepithelial cells as an example, this provides crucial features for challenging diagnoses such as microinvasive breast cancer.
Secondly, MPM exhibits high sensitivity to the extracellular matrix, especially collagen fibers84 and basement membrane80. By analyzing the morphology of collagen fibers, different vascular patterns in tumors can be distinguished43, aiding in the assessment of malignancy and progression of tumors. For instance, observations of glomeruloid vessels in glioblastomas85, hyaline degeneration and collagen aging in cerebral cavernous malformations17. Furthermore, the quantification of fibrosis45 and proliferative reactions86 can be facilitated by extracting features of collagen fibers, which provides crucial evidence for disease progression. More importantly, combining information from the cytoplasm and extracellular matrix, MPM can observe diverse spatial distribution patterns, such as tumor-associated collagen signatures (TACS)23 and tumor-infiltrating lymphocytes (TILs)24, offering a unique perspective on the occurrence and development of infiltrating tumors such as gastric cancer, colorectal cancer, and breast cancer.
Figure 2 presents a representative multiphoton pathological atlas of different diseases (2013–2023), encompassing both tumor23,37,54,87,88,89,90,91,92,93,94,95 and non-tumor components28,96,97. We prioritized articles that included corresponding pathological staining images for multiphoton images. Figure 3 illustrates typical multiphoton diagnostic features of breast cancer98,99. Besides, Table 2 provides a detailed summary of the imaging parameters and typical multiphoton pathological characteristics of MPM applied to both tumor100,101,102,103,104,105,106,107,108,109,110,111,112,113,114 and non-tumor115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131 diseases.
AI-empowered multiphoton pathological diagnosis
Prior to the application of AI in multiphoton images, conventional digital image processing algorithms, such as collagen fiber analysis132 and saliency detection133, were already in existence for the quantitative assessment of tumor-specific multiphoton diagnostic features. In this section, as shown in Fig. 4, we specifically focus on the work related to machine learning and deep learning, providing a brief overview of their empowered capabilities in pathological diagnosis, including image preprocessing, disease diagnosis, and prognosis prediction.
The quality of multiphoton images serves as a prerequisite for ensuring the accuracy of disease diagnosis and prognosis prediction. Therefore, before using multiphoton images for disease diagnosis, researchers often employ image preprocessing techniques, such as image restoration73,134,135 and image super-resolution models136,137,138, to enhance the image textural details and restore hidden pathological features. For instance, to address image quality caused by uneven sample or system instability, adaptive sampling driven by the uncertainty of predicted pixels can be employed to reduce noise134. For stitched multiphoton images, stripe self-correction networks based on proximity sampling scheme can effectively correct stripes or artifacts in the stitched positions135. Additionally, a self-alignment dual-attention-guided super-resolution network can produce high-quality multiphoton images while mitigating the risk of photobleaching138. These preprocessed high-resolution, high-contrast multiphoton images can further enhance the accuracy of downstream diagnostic tasks, such as cell segmentation and counting, and improve the precision of prognostic tasks related to the extraction of collagen features.
On the other hand, as multiphoton imaging gradually enters the field of pathology, virtual image generation techniques serve as a complementary form of preprocessing that enhances the acceptance of multiphoton pathology images. Virtual image generation encompasses the transformation from label-free multiphoton images to virtual pathological staining images17,72,139,140, as well as the generation of virtual multiphoton images from H&E-stained images141. For instance, virtual staining models based on generative adversarial networks (GANs)17 or convolutional neural networks (CNNs)72,139,140,141 can transform multi-channel multiphoton images into H&E staining or specific staining images. Although virtual staining images may sometimes deviate in detail from real stained images, these pathology-styled images assist pathologists in interpreting multiphoton images more effectively. Moreover, CNN architectures with pixel-shuffle layers can generate virtual SHG images directly from H&E-stained images141, eliminating the need for additional staining agents or equipment. This provides a cost-effective method for quantitatively extracting collagen fiber directionality and alignment features. These preprocessing steps provide the foundation for subsequent pathological analysis, facilitating more accurate diagnosis and prognosis.
Currently, diagnostic challenges persist in contemporary pathology, with specific scenarios proving particularly difficult to interpret with precision. Notable examples include the differentiation between glioblastoma and primary lymphoma142, and between ductal carcinoma in situ and microinvasive carcinoma of the breast82. These diagnostic challenges share similarities, often cannot be addressed through conventional pathological techniques. For instance, H&E staining struggles to differentiate or accurately quantify vascular-related elastic fibers and collagen fibers. Additionally, specialized cells such as myoepithelial and basal cells are prone to confusion with neighboring proliferative fibroblasts in the stroma. Excitingly, MPM offers a promising solution that aids in the identification of ambiguous cells, which helps mitigate the subjective differences among pathologists. More importantly, the integration of AI introduces a level of objectivity, supplying auxiliary information that enhances the subjective visual diagnosis performed by human experts. Researchers commonly utilize machine learning methods and deep learning models to automatically extract distinct features of cellular cytoplasm and extracellular matrix from multiphoton images. For instance, segmentation models based on U-Net are employed to extract multiphoton features such as elastic fibers28 and cells29, enabling rapid detection and quantification of pathological regions. Besides, a novel diagnostic method has been developed by fusing the H&E segmentation results of cell nuclei with multiphoton images, leading to more accurate diagnoses of microinvasion in ductal carcinoma in situ82. The method of combining feature extraction methods with machine learning classifiers, such as support vector machine (SVM)33 or stochastic gradient descent (SGD)30 classifier, has shown superior performance in classifying diseases, particularly on small datasets. In contrast, using deep learning classification networks such as ResNet34 or VGG35 allows for the automatic learning of complex pattern.
Prognostic prediction is of paramount significance for understanding disease progression and guiding patient treatment. A robust prognostic prediction model is often associated with the accuracy of pathological diagnostic results and the discovery of pathological novel insights. For instance, based on the tumor-associated collagen signature patterns revealed by MPM in invasive breast cancer, the integration of graph neural networks has facilitated a deeper interpretation of the spatial distribution of these patterns in tumor development42. This approach also provides new clues for the precise classification and treatment of different breast cancer subtypes. With the gradual accumulation of multiphoton datasets, AI-empowered MPM augments the dimensions and efficiency of traditional pathology, elevating multiphoton-assisted diagnosis to a more intelligent and precise level, thereby assisting clinicians in improving the diagnostic accuracy of intractable cases. Table 3 provides a detailed summary of the model types, inputs, and outputs involved in representative label-free multiphoton image preprocessing and intelligent pathological diagnosis from 2013 to 202317,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,72,73,134,135,136,137,138,139,140,141.
Challenges and future perspectives
AI-enhanced multiphoton pathology has markedly advanced the integration of multiphoton microscopes into practical clinical settings, and intelligent multiphoton pathology diagnosis has reached a comparable performance to human experts in certain tasks such as conventional H&E diagnostics. However, only a handful of these algorithms have been successfully integrated into standard clinical processes. As a result, the realization of effective artificial intelligence-based pathology diagnostics using multiphoton imagery remains fraught with challenges. As illustrated in Fig. 5, we will address these obstacles specifically associated with intelligent multiphoton pathology, examining the aspects of multiphoton imaging technologies, dataset acquisition, the development of deep learning algorithms, and their integration into clinical diagnostic procedures.
Multiphoton digital pathological diagnostic instrument
High-speed and high-throughput capability
In comparison to clinical digital slide scanners, current multiphoton microscopes are still limited by imaging speed, cost, and image field of view. Particularly, there is an urgent need for a novel pathology imaging instrument with high-speed and high-throughput capabilities, similar to that of a digital slide scanner. The imaging speed of current multiphoton microscopes is primarily attributed to the scanning speed of two-axis mechanical scanning mirrors and the precision of motorized positioning stages. However, compared to digital slide scanner, the primary challenge lies in the sample preparation process for unstained slices used in multiphoton imaging. This process needs further optimization, such as adjusting slice thickness and adhering to sealing standards. Therefore, it is essential to standardize the quality of label-free slices based on objective working distance or excitation power of laser. This standardization is a prerequisite for improving the imaging throughput of multiphoton instruments. On the other hand, high-throughput simultaneously presents challenges in data transfer and storage. Currently, there are variations in data formats of commercial microscopes from different companies. Consequently, standardizing the multiphoton file formats for specific image compression protocols not only facilitates large-scale data storage but also promotes image sharing and consultation among pathologists.
Miniaturized and portable design
Research-grade multiphoton imaging instruments typically feature multifunctional characteristics, such as tunable femtosecond lasers, high-resolution spectrometers, and directly viewable eyepieces. However, some of features may be redundant for clinical applications. Therefore, multiphoton microscopes tailored for clinical pathology need to simplify the functionalities of research-grade desktop multiphoton microscopes. This simplification not only enhances system usability but also effectively reduces the overall system size, weight, and manufacturing costs. It also makes multiphoton pathology microscopes more affordable and accessible to a broader range of medical centers and researchers, promoting their global adoption. Furthermore, miniaturized multiphoton pathology microscopes offer enhanced mobility and flexibility. In contrast to large-scale research-grade equipment, portable multiphoton microscopes do not require specific environments like cleanrooms for operation. This makes them more suitable for various clinical environments and applications, such as postoperative diagnosis in pathology departments, rapid intraoperative diagnosis in operating rooms, and bedside diagnosis in hospital wards. Notably, compared to conventional optical microscopes in pathology departments, the expense of miniaturized multiphoton microscopes remains considerable. This is primarily attributed to the costs associated with precision equipment, including femtosecond lasers, high numerical aperture objectives, and photomultiplier tubes. Therefore, although miniaturization facilitates the clinical integration of MPM, the substantial upfront investment and ongoing maintenance expenses frequently influence hospitals’ procurement decisions. Such factors may impede the widespread adoption and collaborative utilization of MPM within medical institutions, ultimately diminishing its overall equipment utilization rates.
Multi-modality functionality
In clinical decision-making, the comprehensive utilization of multi-modal information is crucial for a more holistic understanding of diseases, encompassing clinical data and the combination of different imaging modalities such as radiology and pathology. For multiphoton microscopes, in addition to the four nonlinear optical effects (SHG, THG, 2PEF, 3PEF), SRS and coherent anti-stokes Raman scattering (CARS) also exhibits high specificity for different types of biomolecules. The integration of MPM and SRS/CARS in multimodal microscopy enables a more comprehensive characterization of the distribution of pathological features in tissues143. This not only aids in the discovery of novel pathological markers but also provides insights into disease mechanisms. On the other hand, H&E staining are one of the most used diagnostic tools in pathology. Encouragingly, H&E-stained specimen can also be excited to produce multiphoton signals. Therefore, if H&E-stained imaging can be integrated with multiphoton imaging at the image or instrument level, it not only provides more comprehensive information during diagnosis but also enhances the reliability and accuracy of pathological diagnosis. Importantly, the integration of H&E staining establishes a more solid foundation for the widespread application of multiphoton pathology instruments in clinical settings.
Task-oriented high-quality open-source multiphoton datasets
Focusing on specific clinical tasks
As multiphoton instruments are not yet widely employed in clinical pathology, the current scale of multiphoton image datasets is far smaller than that of digital pathology datasets. However, the effectiveness and utility of datasets are prerequisites for expanding dataset scale. To harness the unique advantages of multiphoton pathology diagnosis, it is imperative to establish task-oriented multiphoton pathology datasets, such as those for distinguishing brain tumors from pituitary tumors. Driven by specific clinical tasks, surgeons, pathologists, microscopists, and computer engineers need to collaboratively plan inclusion criteria, case numbers, image dimensions, annotation rules from the early stages of model development. This collaboration is essential to avoid biases that may impact model training. These task-oriented multiphoton datasets not only attract computer vision researchers to improve model metrics, but also draw more attention from clinical practitioners to the auxiliary diagnostic potential of MPM.
Image quality of dataset
Due to influences from factors such as photomultiplier gain, laser power, and sample preparation quality, even the same tissue slices may exhibit resolution and color discrepancies in images scanned by different multiphoton instruments. Such differences in image quality pose challenges to the transferability of the same model between two seemingly similar multiphoton datasets. Despite the development of some style normalization or style transfer models, these models often achieve optimal performance only on specific datasets. Therefore, AI-assisted multiphoton pathology diagnosis should emphasize the rationalization of imaging parameters, standardization of imaging processes and specimen preparation. By exploring and establishing a consensus on the entire process from specimen to imaging, we may be able to control the quality of multiphoton image data from the source, thus addressing the generalization gap caused by inherent heterogeneity in histopathological data.
Open sourcing and sharing of dataset
Currently, acquiring multiphoton image still faces challenges, primarily due to the high academic value of multiphoton datasets and legal or ethical constraints involving human samples. It is worth noting that the rapid development of computer vision is closely related to the open-source and large-scale natural image datasets. To further propel the impact of multiphoton-assisted diagnosis, high-quality work should proactively release the datasets required for training models as much as possible, especially training data. This will prevent researchers from overestimating the performance of the models. Furthermore, to promote the sharing of large-scale datasets, we need to establish a network platform supporting online preview and download of multiphoton image data. This platform should include raw data, corresponding pathological images, dataset descriptions, and task instructions. On the other hand, to address challenges in sharing data when constructing multicenter datasets across different countries due to ethical and regulatory obstacles, federated learning and swarm learning can be attempted to jointly train the models. Federated learning allows multiple institutions to collaboratively improve a global model while preserving the confidentiality of their individual data sets. In parallel, swarm learning enhances prediction accuracy and robustness by integrating diverse models. This approach effectively mitigates overfitting and enhances the model’s generalization capabilities.
Custom-developed multiphoton deep-learned diagnostic tool
Transitioning from supervised to unsupervised training paradigm
Supervised, unsupervised, and semi-supervised learning are the three main training paradigms in deep learning. Supervised learning relies on experts annotating multiphoton images, but obtaining paired ground truth can be challenging. Computational constraints often lead to training gigapixel or terapixel-level images with annotated patches, which is time-consuming and expensive. Moreover, models trained on a single dataset usually lack strong generalization. Self-supervised learning addresses this by designing supervision tasks that transform unsupervised learning into a supervised problem without requiring manual annotations, while semi-supervised learning leverages a small amount of labeled data alongside unlabeled data to reduce dependency on extensive labeling. In segmentation tasks, a self-supervised domain adaptation framework, based on target-specific fine-tuning, adapts the original model to different target-specific pathological tissues for cell segmentation. This domain adaptation occurs across various tissues and multiple medical centers without accessing the source dataset, enhancing the model’s performance even with minimal labeled data144. Additionally, a semi-supervised semantic segmentation network, SCANet, based on a three-branch architecture, alternately trains a multi-scale recurrent neural network branch, a consistency decoder branch, and an adversarial learning branch. This achieves excellent segmentation performance with a small amount of labeled data and extensive unlabeled data145.
Weakly supervised learning harnesses imprecise or incomplete weak label information to train models, mapping input data to stronger labels, thereby reducing reliance on precise annotations. In classification tasks, a weakly supervised learning framework using Information Bottleneck theory fine-tunes the backbone to create task-specific representations from WSI-level weak labels, addressing the limited annotation issue in pathological image classification146. Similarly, another weakly supervised learning framework based on RankMix data augmentation, adapts sample quantities in the training set according to task contributions and mix images of different sizes, mitigating issues of data scarcity and class imbalance147. Ultimately, self-supervised or weakly supervised learning holds promise in addressing challenges such as inadequate generalization, data scarcity, and insufficient labeled data in multiphoton pathology models.
Model architecture for general intelligence
Model performance metrics reflect their ability to perform tasks on specific datasets. From a diagnostic perspective, pathologists are equally concerned about the intelligence of the model’s adaptability and handling of boundary conditions. Firstly, introducing advanced model architecture is crucial for the future of multiphoton intelligent diagnostics. Unlike CNN models trained on small patient cohorts, combining a pre-trained encoder with a transformer network for patch aggregation has been validated for end-to-end biomarker prediction on a large multicenter cohort of over 13,000 colorectal cancer patients148. On the other hand, the process of multiphoton imaging is interpretable, where the pixel intensity in the image represents the spectral characteristics of endogenous fluorescence signal sources. Thus, incorporating the physical principles of MPM into the model ensures more effective capture of endogenous information, potentially revolutionizing the interpretation of multiphoton data and enhancing both generalizability and efficiency.
Secondly, a single modality often fails to fully reveal the complex mechanisms and diversity of diseases, medical centers have established multidisciplinary teams for the clinical treatment of major illnesses. Moreover, molecular pathology laboratories equipped with technologies such as genetic testing, protein analysis, and fluorescence imaging are increasingly demonstrating their capacity for precise diagnosis. Therefore, AI models that integrate multimodal data can provide comprehensive and scientifically sound diagnostic decisions. The histological and genomic features are extracted using a multiple instance learning network and a self-normalizing network, followed by feature fusion through Kronecker product integration to achieve cancer prognosis prediction149. The iStar model, which is based on hierarchical image feature extraction, combines spatial transcriptomics data with high-resolution histological images to predict super-resolution spatial gene expression150. Pan-cancer computational histopathology represents image tile as 1536-dimensional vectors and uses high-dimensional regression methods to integrate histological, genomic, and transcriptomic features, accurately discriminates 28 cancer and 14 normal tissue types151. As a result, incorporating multiphoton image features into the multimodal AI models has the potential to offer unique new perspectives on the interactions between cells and the extracellular matrix within the tumor microenvironment.
Finally, foundational models like ChatGPT in natural language processing demonstrate capabilities for general intelligence, facilitating the development of multiphoton diagnostic models with multitasking abilities. Future advancements will enable tasks such as transforming between multiphoton and H&E images, interpreting multiphoton images alongside pathological reports, and engaging in iterative question-and-answer sessions involving pathological findings and doctor-patient interactions152. A more challenging prospect is transforming multiphoton microscopes into intelligent entities through specialized models, allowing interaction with high-throughput images in clinical pathology diagnostics. This embodied intelligent learning paradigm will ultimately lead to new emergences in MPM diagnostic capabilities, providing opportunities to construct a general intelligent model adaptive to diseases.
Interpretability, repeatability and reliability
Although some multiphoton diagnostic models perform exceptionally well on datasets, even matching or surpassing human diagnostic tasks, the primary hurdle in clinical application is the “black-box” nature of deep learning, i.e., lack of interpretability. Pathologists express concern about writing diagnostic reports when they lack an understanding of how the model reaches its conclusions. Despite interpretability of neural networks has been a long-standing challenge, the methods like feature visualization could provide an approximate explanation of the model’s working process. These visualization results enhance pathologists’ trust in model-assisted decision-making. On the other hand, the reliability of auxiliary diagnostics is also reflected in the model’s repeatability. In the field of multiphoton medicine, while extensive work has been done on medical statistical analysis or model ablation experiments, open-source code contributions are limited. For the open-source work, the training weights of the model are particularly crucial for reproducing results. Therefore, if we verify the repeatability of the model through sufficient code access privileges and data resources, thereby providing confidence intervals, capability boundaries, and computational consumption. This will increase the reliability of the model for clinical deployment. However, achieving breakthroughs in AI interpretability poses significant challenges in the short term. If guided by outcome-driven assessments of model feasibility, clinical validation of deep learning methods emerges as a crucial pathway to enhancing AI reliability, particularly in healthcare settings. For instance, within large-scale multicenter trials employing AI-empowered MPM, despite lingering uncertainties regarding the interpretability of the models, the accuracy metrics of diagnostic tasks serve as robust indicators of their reliability and stability. This, in turn, will also enhance patient acceptance of this novel technology.
The clinical workflow of integrated multiphoton pathology
Multiphoton pathological diagnostic criteria
Pathologists, drawing upon years of accumulated knowledge and experience, have established standard criteria for conventional pathological diagnosis. Even though MPM has demonstrated a series of advancements in pathological applications, firstly, it is essential to establish atlases tailored to multiphoton diagnosis. These atlases should elucidate the diverse applications of multiphoton images across various pathological scenarios. They ought to encompass multiphoton images alongside corresponding images of fresh tissue, frozen sections, paraffin-embedded sections, smears, and organoids for comparative reference. Such comprehensive coverage will assist pathologists in gaining deeper insights into MPM indications and serve as an introductory guide for computer vision researchers exploring multiphoton imaging. Secondly, pathologists typically have minimal or no training in the use of multiphoton-assisted diagnostic technologies. To facilitate a rapid understanding of multiphoton images by pathologists, a virtual staining model can be employed. Multiphoton images can be transformed into virtual H&E images, special stains, and even holds the potential for conversion into immunohistochemistry or immunofluorescence images153. This capability allows pathologists to engage in paired comparative learning, assisting them in gradually incorporating multiphoton features into their diagnostic workflow. With the growing trust among pathologists in multiphoton diagnostics, multi-center clinicians can continuously validate and explore new multiphoton features in clinical practice. This iterative process allows for the enhancement of multiphoton diagnostic capabilities across various medical settings. Finally, combined with efficient AI analysis, this approach can further aid in formulating comprehensive pathology workflows and improving diagnostic precision. For instance, Pohlkamp et al. investigated the use of machine learning to support microscopic differential counts of peripheral blood smears within a high-throughput hematology laboratory setting154. Nasrallah et al. utilized machine learning for cryosection pathology to predict the 2021 WHO classification of glioma155. As multiphoton diagnostic methods achieve consensus, it is anticipated that clinicians and imaging experts will collaboratively integrate multiphoton diagnostic features into clinical diagnostic guidelines or novel histological grading systems for specific diseases.
Multiphoton pathological diagnosis platform
The prospect of multiphoton AI-assisted diagnostic algorithms is exciting for pathologists. However, pathologists typically a background in computer science, and reproducing algorithms or configuring environments can be labor-intensive for them. In the pathology diagnostic workflow, pathologists prefer “plug-and-play” intelligent diagnostic software for decision support. Therefore, there is an urgent need to integrate mature multiphoton diagnostic algorithms into pathology diagnostic systems, such as picture archiving and communication system, in the form of software packages. Pathologists can seamlessly import MPM-based diagnostic results into conventional diagnostic reports, facilitating easier integration into existing diagnostic workflows. Additionally, using cloud-based interactions, pathologists can collaboratively assess this novel multiphoton pathology report with colleagues, with final decisions made by senior pathologists. For less mature models, there is a need for a research-level specialized diagnostic platform for multiphoton images, similar to DeepImageJ156. This platform should deploy and fine-tune pre-trained deep learning models, creating a library of multiphoton diagnostic algorithms. Based on various postoperative or intraoperative diagnostic requirements, algorithms from the platform can be selectively deployed to edge or cloud servers. Besides, a feedback mechanism should be incorporated into the software process to iteratively optimize the diagnostic performance of algorithms in clinical trials.
Ethical security and AI risks
While interdisciplinary personnel have considered ethical security concerns in constructing multiphoton image datasets, multiphoton images involve patient privacy information. In the data management and analysis processes of multiphoton diagnostic software, apart from pathologists, it may also involve bioinformatics, statistics, and computer vision researchers. This may inadvertently lead to risks related to personal privacy or the illegal utilization of data. To address these issues, it is essential to establish privacy protection mechanisms and data protection regulations concerning multiphoton-related data. Beyond ethical security, AI risks also demand attention. Data poisoning and adversarial sample attacks are common methods that threaten the security of the model. Data poisoning involves injecting malicious samples or altering data in the training set to deceive the model, leading to incorrect predictions in future tasks. Adversarial attacks, on the other hand, subtly but purposefully modify input data to cause the model to produce incorrect results. The augmentation of multiphoton data significantly enhances the model’s inference capabilities. However, this increased capability may introduce false features that clinicians find challenging to identify. For instance, the results of virtual staining, without a ground truth for comparison, already poses challenges for pathologists in distinguishing between authentic and synthetic information. This uncertainty can introduce decision biases for pathologists, with the impact on patient prognosis difficult to estimate. To mitigate these AI risks and reduce uncertainty in diagnosis, perhaps there is no need to blindly pursue the innovation and performance of the model; instead, emphasis should be placed on the practicality and stability of the model. Additionally, multiphoton diagnostic software requires rigorous clinical validation and regulatory approval. Through randomized clinical trials, it can determine the role of multiphoton diagnostic algorithms in the entire diagnostic workflow. This ensures the provision of more reliable, controllable, and secure diagnostic results.
Conclusion
Despite the hurdles in progressing multiphoton microscopy (MPM) from traditional pathological uses to intelligent diagnostics, the movement toward smart multiphoton pathology is actively underway. As multiphoton pathology tools evolve and the collection of relevant datasets grows, we anticipate a marked enhancement in both the breadth and depth of artificial intelligence applications within this field. Pathologists are beginning to grasp the enhanced capabilities offered by multiphoton technology. However, it is important to stress that the successful implementation of such sophisticated technology hinges on synchronized collaborative efforts from diverse, interdisciplinary teams across multiple centers. This cooperation is vital for turning scientific discoveries into actionable diagnostic criteria, for refining early-stage prototypes into approved medical devices, and for evolving open-source algorithms into accessible, user-centered software interfaces. With these concerted efforts, MPM is poised to become a cornerstone in the future landscape of diagnostic pathology.
References
Liu, J. T. C. et al. Nondestructive 3D pathology with light-sheet fluorescence microscopy for translational research and clinical assays. Annu. Rev. Anal. Chem. 16, 231–252 (2023).
Ghaznavi, F. et al. Digital imaging in pathology: whole-slide imaging and beyond. Annu. Rev. Pathol. Mechanisms Dis. 8, 331–359 (2013).
Peng, L. T., Zhu, C. L. & Bian, L. H. U-shape transformer for underwater image enhancement. IEEE Trans. Image Process. 32, 3066–3079 (2023).
Wu, Z. et al. Three-dimensional nanoscale reduced-angle ptycho-tomographic imaging with deep learning (RAPID). eLight 3, 7 (2023).
Lin, H. & Cheng, J. X. Computational coherent Raman scattering imaging: breaking physical barriers by fusion of advanced instrumentation and data science. eLight 3, 6 (2023).
Zhang, Y. X. et al. Single-source domain expansion network for cross-scene hyperspectral image classification. IEEE Trans. Image Process. 32, 1498–1512 (2023).
Zou, Z. X. et al. Object detection in 20 years: a survey. Proc. IEEE 111, 257–276 (2023).
Lee, M. et al. Unsupervised video object segmentation via prototype memory network. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 5913–5923 (IEEE, 2023).
Gehrung, M. et al. Triage-driven diagnosis of Barrett’s esophagus for early detection of esophageal adenocarcinoma using deep learning. Nat. Med. 27, 833–841 (2021).
Amgad, M. et al. A population-level digital histologic biomarker for enhanced prognosis of invasive breast cancer. Nat. Med. 30, 85–97 (2024).
Lu, M. Y. et al. AI-based pathology predicts origins for cancers of unknown primary. Nature 594, 106–110 (2021).
Myung, K. & Kim Phase microscopy and surface profilometry by digital holography. Light.: Adv. Manuf. 3, 19 (2022).
Utadiya, Subhash et al. Integrated self-referencing single shot digital holographic microscope and optical tweezer. Light.: Adv. Manuf. 3, 37 (2022).
Gao, Yunhui & Cao, Liangcai Iterative projection meets sparsity regularization: towards practical single-shot quantitative phase imaging with in-line holography. Light.: Adv. Manuf. 4, 6 (2023).
Li, Yuhang et al. Quantitative phase imaging (QPI) through random diffusers using a diffractive optical network. Light.: Adv. Manuf. 4, 19 (2023).
Zvagelsky, Roman et al. Towards in-situ diagnostics of multi-photon 3D laser printing using optical coherence tomography. Light.: Adv. Manuf. 3, 39 (2022).
Wang, S. et al. Resection-inspired histopathological diagnosis of cerebral cavernous malformations using quantitative multiphoton microscopy. Theranostics 12, 6595–6610 (2022).
Zhang, G. Y. et al. Clinically relevant deep learning for detection and quantification of geographic atrophy from optical coherence tomography: a model development and external validation study. Lancet Digital Health 3, e665–e675 (2021).
Zhao, J. H. et al. PhaseFIT: live-organoid phase-fluorescent image transformation via generative AI. Light Sci. Appl. 12, 297 (2023).
Cao, R. et al. Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy. Nat. Biomed. Eng. 7, 124–134 (2023).
Hollon, T. C. et al. Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nat. Med. 26, 52–58 (2020).
Chen, J. H. et al. Prognostic value of tumor necrosis based on the evaluation of frequency in invasive breast cancer. BMC Cancer 23, 530 (2023).
Xi, G. Q. et al. Large-scale tumor-associated collagen signatures identify high-risk breast cancer patients. Theranostics 11, 3229–3243 (2021).
He, J. J. et al. Prognostic value of tumour-infiltrating lymphocytes based on the evaluation of frequency in patients with oestrogen receptor-positive breast cancer. Eur. J. Cancer 154, 217–226 (2021).
Provenzano, P. P. et al. Collagen reorganization at the tumor-stromal interface facilitates local invasion. BMC Med. 4, 38 (2006).
Chen, D. X. et al. Predicting postoperative peritoneal metastasis in gastric cancer with serosal invasion using a collagen nomogram. Nat. Commun. 12, 179 (2021).
Chen, D. X. et al. Association of the collagen signature in the tumor microenvironment with lymph node metastasis in early gastric cancer. JAMA Surg. 154, e185249 (2019).
Wang, Q. Q. et al. Differentiating morphea from lichen sclerosus by using multiphoton microscopy combined with U-Net model for elastic fiber segmentation. J. Biophotonics 16, e202300078 (2023).
Cai, S. J. et al. Dense-UNet: a novel multiphoton in vivo cellular image segmentation model based on a convolutional neural network. Quant. Imaging Med. Surg. 10, 1275–1285 (2020).
Terradillos, E. et al. Analysis on the characterization of multiphoton microscopy images for malignant neoplastic colon lesion detection under deep learning methods. J. Pathol. Inform. 12, 27 (2021).
Wen, B. L. et al. Texture analysis applied to second harmonic generation image data for ovarian cancer classification. J. Biomed. Opt. 19, 096007 (2014).
Wen, B. et al. 3D texture analysis for classification of second harmonic generation images of human ovarian cancer. Sci. Rep. 6, 35734 (2016).
Kistenev, Y. V. et al. Application of multiphoton imaging and machine learning to lymphedema tissue analysis. Biomed. Opt. Express 10, 3353–3368 (2019).
Wang, G. X. et al. Automated ovarian cancer identification using end-to-end deep learning and second harmonic generation imaging. IEEE J. Sel. Top. Quantum Electron. 29, 7200609 (2023).
Lin, H. X. et al. Automated classification of hepatocellular carcinoma differentiation using multiphoton microscopy and deep learning. J. Biophotonics 12, e201800435 (2019).
Yang, Q. Q. et al. Epithelium segmentation and automated gleason grading of prostate cancer via deep learning in label-free multiphoton microscopic images. J. Biophotonics 13, e201900203 (2020).
Huang, X. X. et al. Detection of fibrotic changes in the progression of liver diseases by label-free multiphoton imaging. J. Biophotonics 16, e202300153 (2023).
Xi, G. Q. et al. Automated classification of breast cancer histologic grade using multiphoton microscopy and generative adversarial networks. J. Phys. D: Appl. Phys. 56, 015401 (2023).
Meng, J. et al. Mapping variation of extracellular matrix in human keloid scar by label-free multiphoton imaging and machine learning. J. Biomed. Opt. 28, 045001 (2023).
Blokker, M. et al. Fast intraoperative histology-based diagnosis of gliomas with third harmonic generation microscopy and deep learning. Sci. Rep. 12, 11334 (2022).
You, S. X. et al. Real-time intraoperative diagnosis by deep neural network driven multiphoton virtual histology. npj Precis. Oncol. 3, 33 (2019).
Qiu, L. D. et al. Intratumor graph neural network recovers hidden prognostic value of multi-biomarker spatial heterogeneity. Nat. Commun. 13, 4250 (2022).
Fang, N. et al. Automatic and label-free identification of blood vessels in gliomas using the combination of multiphoton microscopy and image analysis. J. Biophotonics 12, e201900006 (2019).
Mehidine, H. et al. Multimodal imaging to explore endogenous fluorescence of fresh and fixed human healthy and tumor brain tissues. J. Biophotonics 12, e201800178 (2019).
Fang, N. et al. Rapid, label-free detection of intracranial germinoma using multiphoton microscopy. Neurophotonics 6, 035014 (2019).
Fang, N. et al. A pilot study of using multiphoton microscopy to diagnose schwannoma. J. Phys. D: Appl. Phys. 52, 415401 (2019).
Lin, P. H. et al. Diagnosing pituitary adenoma in unstained sections based on multiphoton microscopy. Pituitary 21, 362–370 (2018).
He, J. J. et al. Label-free detection of invasive micropapillary carcinoma of the breast using multiphoton microscopy. J. Biophotonics 16, e202200224 (2023).
Han, Z. H. et al. Detection of pathological response of axillary lymph node metastasis after neoadjuvant chemotherapy in breast cancer using multiphoton microscopy. J. Biophotonics 16, e202200274 (2023).
Gavgiotaki, E. et al. Third Harmonic Generation microscopy distinguishes malignant cell grade in human breast tissue biopsies. Sci. Rep. 10, 11055 (2020).
Shen, T. F. et al. Monitoring the progression of lobular breast carcinoma using multiphoton microscopy. Laser Phys. Lett. 16, 105601 (2019).
Sun, Y. et al. Intraoperative visualization of the tumor microenvironment and quantification of extracellular vesicles by label-free nonlinear imaging. Sci. Adv. 4, eaau5603 (2018).
Nie, Y. T. et al. Differentiating the two main histologic categories of fibroadenoma tissue from normal breast tissue by using multiphoton microscopy. J. Microsc. 258, 79–85 (2015).
Matsui, T. et al. Non-labeling multiphoton excitation microscopy as a novel diagnostic tool for discriminating normal tissue and colorectal cancer lesions. Sci. Rep. 7, 6959 (2017).
Li, L. H. et al. Visualization of tumor response to neoadjuvant therapy for rectal carcinoma by nonlinear optical imaging. IEEE J. Sel. Top. Quantum Electron. 22, 158–163 (2016).
Li, L. H. et al. Detection of morphologic alterations in rectal carcinoma following preoperative radiochemotherapy based on multiphoton microscopy imaging. BMC Cancer 15, 142 (2015).
Yan, J. et al. Real-time optical diagnosis for surgical margin in low rectal cancer using multiphoton microscopy. Surgical Endosc. 28, 36–41 (2014).
Methods in Brief. Label-free histopathology. Nat. Methods 13, 815 (2016).
Huang, S. H., Heikal, A. A. & Webb, W. W. Two-photon fluorescence spectroscopy and microscopy of NAD(P)H and flavoprotein. Biophysical J. 82, 2811–2825 (2002).
Chen, X. Y. et al. Second harmonic generation microscopy for quantitative analysis of collagen fibrillar structure. Nat. Protoc. 7, 654–669 (2012).
Van Steenbergen, V. et al. Molecular understanding of label-free second harmonic imaging of microtubules. Nat. Commun. 10, 3530 (2019).
Nucciotti, V. et al. Probing myosin structural conformation in vivo by second-harmonic generation microscopy. Proc. Natl Acad. Sci. USA 107, 7763–7768 (2010).
Zipfel, W. R. et al. Live tissue intrinsic emission microscopy using multiphoton-excited native fluorescence and second harmonic generation. Proc. Natl Acad. Sci. USA 100, 7075–7080 (2003).
Monici, M. Cell and tissue autofluorescence research and diagnostic applications. Biotechnol. Annu. Rev. 11, 227–256 (2005).
Denk, W., Strickler, J. H. & Webb, W. W. Two-photon laser scanning fluorescence microscopy. Science 248, 73–76 (1990).
Petersson, P. Two-Photon Excited Laser Scanning Confocal Microscopy. Lund Reports on Atomic Physics LRAP-226 (1997).
Zong, W. J. et al. Fast high-resolution miniature two-photon microscopy for brain imaging in freely behaving mice. Nat. Methods 14, 713–719 (2017).
Wei, M. J. Space station two-photon microscope. Preprint at https://news.cgtn.com/news/2023-04-14/China-Space-Station-achieves-100-regeneration-of-oxygen-resources-1iZpNmOwR4k/index.html (2023).
Masters, B. R., So, P. T. & Gratton, E. Multiphoton excitation fluorescence microscopy and spectroscopy of in vivo human skin. Biophysical J. 72, 2405–2412 (1997).
Cox, G. et al. 3-Dimensional imaging of collagen using second harmonic generation. J. Struct. Biol. 141, 53–62 (2003).
Tu, H. H. et al. Concurrence of extracellular vesicle enrichment and metabolic switch visualized label-free in the tumor microenvironment. Sci. Adv. 3, e1600675 (2017).
Borhani, N. et al. Digital staining through the application of deep neural networks to multi-modal multi-photon microscopy. Biomed. Opt. Express 10, 1339–1350 (2019).
Shen, B. L. et al. Deep learning autofluorescence-harmonic microscopy. Light Sci. Appl. 11, 76 (2022).
Göppert-Mayer, M. Über elementarakte mit zwei quantensprüngen. Ann. der Phys. 401, 273–294 (1931).
Kaiser, W. & Garrett, C. G. B. Two-photon excitation in CaF2: Eu2+. Phys. Rev. Lett. 7, 229–231 (1961).
Hellwarth, R. & Christensen, P. Nonlinear optical microscopic examination of structure in polycrystalline ZnSe. Opt. Commun. 12, 318–322 (1974).
Huff, J. The Fast mode for Zeiss LSM 880 with Airyscan: high-speed confocal imaging with super-resolution and improved signal-to-noise ratio. Nature Methods 13, i–ii (2016).
Chang, L. Nikon’s large-format multiphoton system for intravital imaging. Nat. Methods 12, iii–iv (2015).
Yan, J. et al. Preclinical study of using multiphoton microscopy to diagnose liver cancer and differentiate benign and malignant liver lesions. J. Biomed. Opt. 17, 026004 (2012).
Wu, Y. et al. Identifying three different architectural subtypes of mammary ductal carcinoma in situ using multiphoton microscopy. J. Phys. D: Appl. Phys. 48, 405401 (2015).
Fang, N. et al. Quantitative assessment of microenvironment characteristics and metabolic activity in glioma via multiphoton microscopy. J. Biophotonics 12, e201900136 (2019).
Han, X. H. et al. Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: an innovative method with H&E-stained and multiphoton microscopy images. Int. J. Cancer 154, 1802–1813 (2024).
Zhang, H. et al. Optical biopsy of laryngeal lesions using femtosecond multiphoton microscopy. Biomed. Opt. Express 12, 1308–1319 (2021).
Zeng, Y. P. et al. Intraoperative assisting diagnosis of esophageal submucosal cancer using multiphoton microscopy. Laser Phys. Lett. 15, 075603 (2018).
Wang, S. et al. Label-free detection of the architectural feature of blood vessels in glioblastoma based on multiphoton microscopy. IEEE J. Sel. Top. Quantum Electron. 27, 7200907 (2021).
Fang, N. et al. Label-free detection of brain invasion in meningiomas by multiphoton microscopy. Laser Phys. Lett. 16, 015603 (2019).
Jain, M. et al. Multiphoton microscopy: a potential “optical biopsy” tool for real-time evaluation of lung tumors without the need for exogenous contrast agents. Arch. Pathol. Lab. Med. 138, 1037–1047 (2014).
Jain, M. et al. Multiphoton microscopy: a potential intraoperative tool for the detection of carcinoma in situ in human bladder. Arch. Pathol. Lab. Med. 139, 796–804 (2015).
Yan, J. et al. Real-time optical diagnosis of gastric cancer with serosal invasion using multiphoton imaging. Sci. Rep. 6, 31004 (2016).
Jain, M. et al. Exploring multiphoton microscopy as a novel tool to differentiate chromophobe renal cell carcinoma from oncocytoma in fixed tissue sections. Arch. Pathol. Lab. Med. 142, 383–390 (2018).
Xu, J. et al. Multiphoton microscopy for label-free identification of intramural metastasis in human esophageal squamous cell carcinoma. Biomed. Opt. Express 8, 3360–3368 (2017).
Xu, J. et al. Identifying the neck margin status of ductal adenocarcinoma in the pancreatic head by multiphoton microscopy. Sci. Rep. 7, 4586 (2017).
Ling, Y. T. et al. Second harmonic generation (SHG) imaging of cancer heterogeneity in ultrasound guided biopsies of prostate in men suspected with prostate cancer. J. Biophotonics 10, 911–918 (2017).
Pouli, D. et al. Two-photon images reveal unique texture features for label-free identification of ovarian cancer peritoneal metastases. Biomed. Opt. Express 10, 4479–4488 (2019).
Zhan, H. L. et al. Identification of the tumor boundary of hilar cholangiocarcinoma based on multiphoton microscopy. IEEE Photonics J. 14, 4050107 (2022).
Pukaluk, A. et al. Changes in the microstructure of the human aortic medial layer under biaxial loading investigated by multi-photon microscopy. Acta Biomaterialia 151, 396–413 (2022).
Yang, Y. L. et al. Multiphoton microscopy providing pathological-level quantification of myocardial fibrosis in transplanted human heart. Lasers Med. Sci. 37, 2889–2898 (2022).
Xi, G. Q. et al. Label-free imaging of blood vessels in human normal breast and breast tumor tissue using multiphoton microscopy. Scanning 2019, 5192875 (2019).
Liu, Y. L. et al. Quantitative analysis of collagen morphology in breast cancer from millimeter scale using multiphoton microscopy. J. Innovative Optical Health Sci. 16, 2243003 (2023).
Xi, G. Q. et al. Rapid label-free detection of early-stage lung adenocarcinoma and tumor boundary via multiphoton microscopy. J. Biophotonics 16, e202300172 (2023).
Golaraei, A. et al. Polarimetric second-harmonic generation microscopy of the hierarchical structure of collagen in stage I-III non-small cell lung carcinoma. Biomed. Opt. Express 11, 1851–1863 (2020).
Li, L. H. et al. Label-free identification of early gastrointestinal neuroendocrine tumors via biomedical multiphoton microscopy and automatic image analysis. IEEE Access 8, 105681–105689 (2020).
Zheng, X. L. et al. Margin diagnosis for endoscopic submucosal dissection of early gastric cancer using multiphoton microscopy. Surgical Endosc. 34, 408–416 (2020).
Lin, H. X. et al. Label-free classification of hepatocellular-carcinoma grading using second harmonic generation microscopy. Biomed. Opt. Express 9, 3783–3793 (2018).
Chen, J. et al. Optical characterization of lesions and identification of surgical margins in pancreatic metastasis from renal cell carcinoma by using two-photon excited fluorescence microscopy. Laser Phys. 24, 115603 (2014).
Chen, Y. T. et al. Multiphoton microscopy as a diagnostic imaging modality for pancreatic neoplasms without hematoxylin and eosin stains. J. Biomed. Opt. 19, 96008 (2014).
Qian, S. H. et al. Identification of human ovarian cancer relying on collagen fiber coverage features by quantitative second harmonic generation imaging. Opt. Express 30, 25718–25733 (2022).
Pouli, D. et al. Label-free, high-resolution optical metabolic imaging of human cervical precancers reveals potential for intraepithelial neoplasia diagnosis. Cell Rep. Med. 1, 100017 (2020).
Pradère, B. et al. Two-photon optical imaging, spectral and fluorescence lifetime analysis to discriminate urothelial carcinoma grades. J. Biophotonics 11, e201800065 (2018).
Jain, M. et al. Multiphoton microscopy in the evaluation of human bladder biopsies. Arch. Pathol. Lab. Med. 136, 517–526 (2012).
Huland, D. M. et al. Multiphoton gradient index endoscopy for evaluation of diseased human prostatic tissue ex vivo. J. Biomed. Opt. 19, 116011 (2014).
Huttunen, M. J. et al. Multiphoton microscopy of the dermoepidermal junction and automated identification of dysplastic tissues with deep learning. Biomed. Opt. Express 11, 186–199 (2020).
Arginelli, F. et al. High resolution diagnosis of common nevi by multiphoton laser tomography and fluorescence lifetime imaging. Ski. Res. Technol. 19, 194–204 (2013).
Seidenari, S. et al. Diagnosis of BCC by multiphoton laser tomography. Ski. Res. Technol. 19, e297–e304 (2013).
Wang, S. et al. Optical visualization of cerebral cortex by label-free multiphoton microscopy. IEEE J. Sel. Top. Quantum Electron. 25, 6800508 (2019).
Batista, A. et al. High-resolution, label-free two-photon imaging of diseased human corneas. J. Biomed. Opt. 23, 036002 (2018).
Jain, M. et al. A component-by-component characterisation of high-risk atherosclerotic plaques by multiphoton microscopic imaging. J. Microsc. 268, 39–44 (2017).
Wong, S. et al. Evaluation of barrett esophagus by multiphoton microscopy. Arch. Pathol. Lab. Med. 138, 204–212 (2014).
Kottmann, R. M. et al. Second harmonic generation microscopy reveals altered collagen microstructure in usual interstitial pneumonia versus healthy lung. Respiratory Res. 16, 61 (2015).
Tilbury, K. et al. Second harmonic generation microscopy analysis of extracellular matrix changes in human idiopathic pulmonary fibrosis. J. Biomed. Opt. 19, 086014 (2014).
Tjin, G. et al. Quantification of collagen I in airway tissues using second harmonic generation. J. Biomed. Opt. 19, 036005 (2014).
Zhang, R. L. et al. Label-free identification of human coronary atherosclerotic plaque based on a three-dimensional quantitative assessment of multiphoton microscopy images. Biomed. Opt. Express 12, 2979–2995 (2021).
Goh, G. B. B. et al. Quantification of hepatic steatosis in chronic liver disease using novel automated method of second harmonic generation and two-photon excited fluorescence. Sci. Rep. 9, 2975 (2019).
Jiang, S. Y. et al. Mapping the 3D remodeling of the extracellular matrix in human hypertrophic scar by multi-parametric multiphoton imaging using endogenous contrast. Heliyon 9, e13653 (2023).
Han, Y. et al. Non-invasive imaging of pathological scars using a portable handheld two-photon microscope. Chin. Med. J. 137, 329–337 (2024).
Utino, F. L. et al. Second-harmonic generation imaging analysis can help distinguish sarcoidosis from tuberculoid leprosy. J. Biomed. Opt. 23, 126001 (2018).
Springer, S. et al. Examination of wound healing after curettage by multiphoton tomography of human skin in vivo. Ski. Res. Technol. 23, 452–458 (2017).
Huck, V. et al. From morphology to biochemical state - intravital multiphoton fluorescence lifetime imaging of inflamed human skin. Sci. Rep. 6, 22789 (2016).
Koehler, M. J. et al. Acute UVB-induced epidermal changes assessed by multiphoton laser tomography. Ski. Res. Technol. 21, 137–143 (2015).
Tong, P. L. et al. A quantitative approach to histopathological dissection of elastin-related disorders using multiphoton microscopy. Br. J. Dermatol. 169, 869–879 (2013).
Lin, L. H. et al. Visualization of dermal alteration in skin lesions with discoid lupus erythematosus by multiphoton microscopy. Laser Phys. 23, 045606 (2013).
Liu, Z. Y. et al. 3D organizational mapping of collagen fibers elucidates matrix remodeling in a hormone-sensitive 3D breast tissue model. Biomaterials 179, 96–108 (2018).
Zhou, Z. et al. Adaptive image enhancement for tracing 3D morphologies of neurons and brain vasculatures. Neuroinformatics 13, 153–166 (2015).
Ye, C. T. et al. Learned, uncertainty-driven adaptive acquisition for photon-efficient multiphoton microscopy. Preprint at https://doi.org/10.48550/arXiv.2310.16102 (2023).
Wang, S. et al. A deep learning-based stripe self-correction method for stitched microscopic images. Nat. Commun. 14, 5393 (2023).
McAleer, S. et al. Deep learning-assisted multiphoton microscopy to reduce light exposure and expedite imaging in tissues with high and low light sensitivity. Transl. Vis. Sci. Technol. 10, 30 (2021).
Lin, G. M. et al. Enhanced resnet-based super-resolution method for two-photon microscopy image. Signal, Image Video Process. 16, 2157–2163 (2022).
Zhao, Z. W. et al. Deep learning-based high-speed, large-field, and high-resolution multiphoton imaging. Biomed. Opt. Express 14, 65–80 (2023).
Picon, A. et al. Novel pixelwise co-registered hematoxylin-eosin and multiphoton microscopy image dataset for human colon lesion diagnosis. J. Pathol. Inform. 13, 100012 (2022).
Shi, J. D. et al. Weakly supervised identification of microscopic human breast cancer-related optical signatures from normal-appearing breast tissue. Biomed. Opt. Express 14, 1339–1354 (2023).
Keikhosravi, A. et al. Non-disruptive collagen characterization in clinical histopathology using cross-modality image synthesis. Commun. Biol. 3, 414 (2020).
Sugita, Y. et al. Intraoperative rapid diagnosis of primary central nervous system lymphomas: advantages and pitfalls. Neuropathology 34, 438–445 (2014).
Tu, H. H. et al. Stain-free histopathology by programmable supercontinuum pulses. Nat. Photonics 10, 534–540 (2016).
Li, Z. Y. et al. Toward source-free cross tissues histopathological cell segmentation via target-specific finetuning. IEEE Trans. Med. Imaging 42, 2666–2677 (2023).
Shen, N. et al. SCANet: a unified semi-supervised learning framework for vessel segmentation. IEEE Trans. Med. Imaging 42, 2476–2489 (2023).
Li, H. L. et al. Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7454–7463 (IEEE, 2023).
Chen, Y. C. & Lu, C. S. RankMix: data augmentation for weakly supervised learning of classifying whole slide images with diverse sizes and imbalanced categories. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 23936–23945 (IEEE, 2023).
Wagner, S. J. et al. Transformer-based biomarker prediction from colorectal cancer histology: a large-scale multicentric study. Cancer Cell 41, 1650–1661.e4 (2023).
Chen, R. J. et al. Pan-cancer integrative histology-genomic analysis via multimodal deep learning. Cancer Cell 40, 865–878.e6 (2022).
Zhang, D. W. et al. Inferring super-resolution tissue architecture by integrating spatial transcriptomics with histology. Nat. Biotechnol. https://doi.org/10.1038/s41587-023-02019-9 (2024).
Fu, Y. et al. Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis. Nat. Cancer 1, 800–810 (2020).
Lu, M. Y. et al. A multimodal generative AI Copilot for human pathology. Nature https://doi.org/10.1038/s41586-024-07618-3 (2024).
Ghahremani, P. et al. Deep learning-inferred multiplex immunofluorescence for immunohistochemical image quantification. Nat. Mach. Intell. 4, 401–412 (2022).
Pohlkamp, C. et al. Machine learning (ML) can successfully support microscopic differential counts of peripheral blood smears in a high throughput hematology laboratory. Blood 136, 45–46 (2020).
Nasrallah, M. P. et al. Machine learning for cryosection pathology predicts the 2021 WHO classification of glioma. Med 4, 526–540.e4 (2023).
Gómez-de-Mariscal, E. et al. DeepImageJ: a user-friendly environment to run deep learning models in ImageJ. Nat. Methods 18, 1192–1195 (2021).
Acknowledgements
We would like to thank Lianhuang Li, Xiahui Han, Shuangmu Zhuo, Guannan Chen, Guangxing Wang, and Runlong Wu for their support in the image permissions. This work was support by the National Natural Science Foundation of China (62005049 to S.W., 62072110 to W.L., 82171991 to J.C., T2288102, 32227802, and 81925022, to L.C.), the National Science and Technology Major Project Program (2022YFC3400600 to L.C.), the Natural Science Foundation of Fujian Province (2022J01216 and 2023J011125 to J.C.). L.C. was supported by New Cornerstone Science Foundation.
Author information
Authors and Affiliations
Contributions
S.W., F.H., L.C., and J.C. designed and directed the study; R.L., X.W., and D.K. participated in discussions on medical issues; S.W. wrote the manuscript; J.P. X.Z., Y.L., and Z.L. prepared the figures and tables. S.W., J.P., X.Z., Y.L., W.L., F.H., L.C., and J.C. reviewed and revised manuscript; All the authors read and agreed to the final version of the manuscript.
Corresponding authors
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wang, S., Pan, J., Zhang, X. et al. Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy. Light Sci Appl 13, 254 (2024). https://doi.org/10.1038/s41377-024-01597-w
Received:
Revised:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41377-024-01597-w