Context-aware graph deep learning for prognostic histopathology
This issue highlights the development of machine-learning models for the detection of signs of disease in external photographs of the eyes, for the optimization of the trade-off between prediction performance and feature cost in healthcare applications, for the expert-level detection of pathologies from unannotated chest X-ray images, for transforming the style of tissue images, for the fast search and retrieval of whole-slide images, for the derivation of prognostic contextual histopathological features from whole-slide images of tumours, and for the characterization of tumour microenvironments from spatial protein profiles.
The cover illustrates that graph deep learning applied to gigapixel whole-slide images of tumours can leverage information in the tumour microenvironment to derive interpretable histopathological features with prognostic value.
Graph neural networks and transformers taking advantage of contextual information and large unannotated multimodal datasets are redefining what is possible in computational medicine.
Deep-learning models trained with images of the external part of the eyes, rather than fundus images of the retina, can also be used to detect severe diabetic conditions, such as diabetic retinopathy.
Weakly supervised deep-learning models for the analysis of whole-slide images from tumour biopsies perform better at prognostic tasks if the models incorporate context from the local microenvironment.
Graph deep learning can be used to detect contextual pathological features within complex tumour microenvironments, to predict the prognosis of patients with tumours, and to identify additional contextual prognostic biomarkers for pathologists.
Graph deep learning applied to multiplexed immunofluorescence data from tumour microenvironments reveals spatial cellular structures that are indicative of cancer prognosis.
This Review discusses the use of deep generative models, federated learning and transformer models to address challenges in the deployment of machine learning for healthcare.
This Review discusses the advantages and limitations of self-supervised methods and models for use in medicine and healthcare, and the challenges in collecting unbiased data for their training.
Deep-learning models trained on external eye photographs can detect diabetic retinopathy, diabetic macular oedema and poor blood glucose control more accurately than models relying on demographic and medical history data.
A cost-aware AI framework facilitates the development of predictive AI models that optimize the trade-off between prediction performance and feature cost.
A self-supervised model trained on chest X-ray images that lack explicit annotations performs pathology-classification tasks with accuracies comparable to those of radiologists.
A deep-learning model that transforms cryosectioned whole-slide tissue images into the style of whole-slide formalin-fixed and paraffin-embedded tissue improves the rates of accurate tumour subtyping.
A self-supervised deep-learning algorithm searches for and retrieves gigapixel whole-slide images at speeds that are independent of the size of the image repository.
A graph neural network that leverages spatial protein profiles in tissue specimens to model tumour microenvironments as local subgraphs captures distinctive cellular interactions associated with differential clinical outcomes.
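The common thread across several of these studies is modelling tissue as a graph: cells (or image patches) become nodes with feature vectors, spatial adjacency becomes edges, and message passing aggregates each node's local microenvironment into a context-aware representation. The following is a minimal illustrative sketch of that idea only, not code from any of the papers above; the function names, toy features, and adjacency matrix are all hypothetical.

```python
import numpy as np

def message_pass(features, adjacency):
    """One round of mean-aggregation message passing over a cell graph.

    Each row of `features` is a cell's feature vector (e.g. protein-marker
    intensities); `adjacency` connects spatially neighbouring cells.
    """
    # Add self-loops so each cell retains its own signal.
    a = adjacency + np.eye(adjacency.shape[0])
    # Row-normalise so each cell averages over its neighbourhood.
    a = a / a.sum(axis=1, keepdims=True)
    return a @ features

# Toy microenvironment: 4 cells, 3 marker channels each (hypothetical values).
features = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0]])
# Symmetric adjacency between spatially neighbouring cells.
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)

embedded = message_pass(features, adjacency)
# A graph-level readout (mean pooling) yields a context-aware summary
# that could feed a downstream prognostic classifier.
readout = embedded.mean(axis=0)
```

In the published models this scheme is scaled up with learned aggregation weights and many layers, but the principle is the same: each cell's representation is enriched with information from its spatial neighbours before any slide-level prediction is made.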