The accelerating power of machine learning in diagnosing disease and in sorting and classifying health data will empower physicians and speed up decision-making in the clinic.
Cascaded diffusion models can be used to synthesize realistic whole-slide image tiles from latent representations of RNA-sequencing data from human tumours.
A machine-learning model trained on interactions between oral drugs and intestinal drug transporters obtained by modulating their expression in intact porcine tissue can be used to predict drug–transporter and drug–drug interactions.
The inference process of medical-image classifiers can be audited by leveraging the expertise of physicians to identify medically meaningful features in ‘counterfactual’ images produced via generative AI.
Skin-microangiopathy phenotypes can be correlated with diabetes stage by leveraging clinically explainable morphophysiological features obtained from the analysis, via machine learning, of raster-scan optoacoustic mesoscopy images of skin on the leg.
A multiple-instance-learning model trained to encode and aggregate either the local sequence contexts or the genomic positions of somatic mutations achieved best-in-class performance in classification and prediction tasks.
Interpretable machine-learning models can identify clinical-stage monoclonal antibodies with optimal combinations of low off-target binding and low self-association in physiological and antibody-formulation conditions.
An automated plaque assay leveraging lens-free holographic imaging and deep learning rapidly and accurately detects the cell-lysing events caused by viral replication.
A transformer-based representation-learning model that processes multimodal input in a unified manner outperformed non-unified multimodal models in two clinical diagnostic tasks.
A representation-learning strategy for machine-learning models applied to medical-imaging tasks improves model robustness and training efficiency and mitigates suboptimal out-of-distribution performance.
Intratumoural heterogeneity can be characterized spatially in patients with liver metastases from colorectal cancer via phenotype-specific multi-view learning models trained with PET–MRI data from mice with subcutaneous colon tumours.
Ensembles of explainable machine-learning models increase the quality of explanations for the molecular basis of synergistic drug combinations, as shown for the treatment of acute myeloid leukaemia.
A machine-learning system leveraging a vision transformer and supervised contrastive learning accurately decodes elements of intraoperative surgical activity from videos commonly collected during robotic surgeries.
A machine-learning pipeline that mines the entire space of polypeptide-chain sequences can identify potent antimicrobial peptides by integrating tasks that gradually narrow down the search space.
This Perspective overviews the sources of prediction uncertainty in machine learning for applications in healthcare, and discusses how to implement suitable prediction-uncertainty metrics.
A deep-learning model that transforms cryosectioned whole-slide tissue images into the style of whole-slide formalin-fixed and paraffin-embedded tissue improves the rates of accurate tumour subtyping.
A graph neural network that leverages spatial protein profiles in tissue specimens to model tumour microenvironments as local subgraphs captures distinctive cellular interactions associated with differential clinical outcomes.
A self-supervised deep-learning algorithm searches for and retrieves gigapixel whole-slide images at speeds that are independent of the size of the image repository.
A self-supervised model trained on chest X-ray images that lack explicit annotations performs pathology-classification tasks with accuracies comparable to those of radiologists.
Graph deep learning can leverage information in the tumour microenvironment to extract prognostic histopathological features from gigapixel-sized whole-slide images.
This Review discusses the advantages and limitations of self-supervised methods and models for use in medicine and healthcare, and the challenges in collecting unbiased data for their training.
This Review discusses the use of deep generative models, federated learning and transformer models to address challenges in the deployment of machine learning for healthcare.
A cost-aware AI framework facilitates the development of predictive AI models that optimize the trade-off between prediction performance and feature cost.
Deep-learning models trained on external eye photographs can detect diabetic retinopathy, diabetic macular oedema and poor blood glucose control more accurately than models relying on demographic and medical history data.
Ovarian cancer can be predicted with high sensitivity and specificity via a fingerprint obtained, via machine learning, from near-infrared fluorescence emissions of an array of carbon nanotube sensors in serum samples.
Two potent mitophagy inducers, identified and characterized via unsupervised machine learning and a cross-species screening approach, ameliorated the pathology of Alzheimer’s disease in worms and mice.
A generative model that learns mappings between hand kinematics and the associated neural spike trains can be rapidly adapted to new sessions or participants by using limited additional neural data.
Early apoptotic responses to oncolytic virotherapy in mice can be rapidly detected by chemical-exchange-saturation-transfer magnetic resonance fingerprinting, by leveraging a neural network trained with simulated magnetic resonance fingerprints.
Deep methylation sequencing aided by a machine-learning classifier of methylation patterns enables the detection of early cancers from plasma samples at dilution factors as low as 1/10,000.
Deep-learning models trained on retinal fundus images can be used to identify chronic kidney disease and type 2 diabetes and to predict the risk of the progression of these diseases.
An explainable deep-learning system prospectively predicts clinical scores for breast cancer risk from multimodal breast-ultrasound images as accurately as experienced radiologists.
Adversarial learning can be used to develop high-performing networks trained on unannotated medical images of varying image quality, and to adapt pretrained supervised networks to new domain-shifted datasets.
Therapeutic antibodies can be optimized using deep-learning models trained on antibody-mutagenesis libraries to generate antibody variants and predict their antigen specificity.
An automated deep-learning pipeline for chest-X-ray-image standardization, lesion visualization and disease diagnosis can identify viral pneumonia caused by COVID-19, assess its severity, and discriminate it from other types of pneumonia.
A computational method leveraging deep learning and molecular dynamics simulations enables the rapid discovery of antimicrobial peptides with low toxicity and with high potency against diverse Gram-positive and Gram-negative pathogens.
A data-efficient and interpretable deep-learning method for the multi-class classification of whole-slide images that relies only on slide-level labels is applied to the detection of lymph node metastasis and to cancer subtyping.
A deep-learning model trained on raw pixel data in hundreds of thousands of echocardiographic videos for the prediction of one-year all-cause mortality outperforms clinical scores and improves predictions by cardiologists.
An open resource comprising chest computed tomography images and 130 clinical features of 1,521 patients with pneumonia, including COVID-19 pneumonia, facilitates the prediction of morbidity and mortality outcomes via deep learning.
Deep-learning models for the automated measurement of retinal-vessel calibre in retinal photographs perform comparably to or better than expert graders in associations of measurements of retinal-vessel calibre with cardiovascular risk factors.
A workflow that segments anatomical structures in slit-lamp images and that annotates pathological features in each image improves the performance of a deep-learning algorithm for the diagnosis of ophthalmic disorders.
A ‘smart’ toilet that uses pressure and motion sensors, biometric identification, urinalysis strips, a computer-vision uroflowmeter and machine learning longitudinally tracks biomarkers of health and disease in the user’s urine and stool.
Machine-learning algorithms trained with retinal fundus images, with subject metadata or with both data types, predict haemoglobin concentration with mean absolute errors lower than 0.75 g dl⁻¹ and anaemia with areas under the curve in the range of 0.74–0.89.
A deep-learning model trained to map 2D projection views of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view.
The analysis of behavioural patterns from standardized video recordings of infants with varying degrees of visual impairment enables, via deep learning, classification of the infants by visual-impairment severity and by ophthalmological condition.
Deep learning can be used to virtually stain autofluorescence images of unlabelled tissue sections, generating images that are equivalent to the histologically stained versions.
An interpretable deep-learning algorithm trained on a small dataset of computed-tomography scans of the head detects acute intracranial haemorrhage and classifies the pathology subtypes, with a performance comparable to expert radiologists.
An alert system based on machine learning and trained on surgical data from electronic medical records helps anaesthesiologists prevent hypoxaemia during surgery by providing interpretable real-time predictions.
A deep-learning algorithm can detect polyps in the colon in real time and with high sensitivity and specificity, according to validation studies with prospectively collected images and videos from colonoscopies performed in 1,138 patients.
An assay that uses machine-learning algorithms on phenotypic-biomarker data from live primary cells predicts post-surgical adverse pathology in prostate-cancer and breast-cancer tissue samples from patients.
A low-cost point-of-care device that uses contrast-enhanced microholography and deep learning accurately detects aggressive lymphomas in patients referred for aspiration and biopsy of enlarged lymph nodes.
A microfluidic assay that identifies sepsis from a single droplet of diluted blood by measuring the spontaneous motility of neutrophils showed 97% sensitivity and 98% specificity in two independent patient cohorts.
Deep learning predicts, from retinal images, cardiovascular risk factors—such as smoking status, blood pressure and age—not previously thought to be present or quantifiable in these images.
A cloud-based machine-learning software that scores individual guide–target pairs and provides an overall summary score for a given guide outperforms competing algorithms for the prediction of CRISPR–Cas9 off-target effects.
By taking advantage of stimulated Raman spectroscopy and fibre-laser technology, virtual histology images can be obtained in real time in the operating room, with diagnostic quality comparable with that achieved via conventional histopathology.
A man/machine interface based on the activity of spinal motor neurons reinnervating the muscles of a missing limb in amputees enables the generation of neural signals for potential prosthetic control.
An artificial intelligence agent integrated with a cloud-based platform for multihospital collaboration performs as well as ophthalmologists in the diagnosis of congenital cataracts in a series of online tests and a multihospital clinical trial.
Leveraging the expertise of physicians to identify medically meaningful features in ‘counterfactual’ images produced via generative machine learning facilitates the auditing of the inference process of medical-image classifiers, as shown for dermatology images.
A framework for integrating continuous therapeutic monitoring and the development of AI for clinical care may improve patient and health-system outcomes by tightening feedback loops between patient health, clinical interactions and the development of AI models.
The development of machine-learning systems for safer, robust and fairer outcomes should leverage fine-tuning, generalization, explainability and metrics of uncertainty.
Graph neural networks and transformers taking advantage of contextual information and large unannotated multimodal datasets are redefining what is possible in computational medicine.
Weakly supervised deep-learning models for the analysis of whole-slide images from tumour biopsies perform better at prognostic tasks if the models incorporate context from the local microenvironment.
Deep-learning models trained with images of the external part of the eyes, rather than fundus images of the retina, can also be used to detect severe diabetic conditions, such as diabetic retinopathy.
An efficient protocol for the preparation of DNA libraries for the analysis of methylation patterns in cell-free DNA in plasma enhances the sensitivity of bisulfite sequencing for the early detection of lung cancer.
The proliferation of synthetic data in artificial intelligence for medicine and healthcare raises concerns about the vulnerabilities of the software and the challenges of current policy.
Neuropathologies can be classified, on the basis of post-mortem histopathology and by using machine learning, into six transdiagnostic clusters associated with clinical phenotypes.
A deep-learning model for cancer detection trained on a large number of scanned pathology slides and associated diagnosis labels enables model development without the need for pixel-level annotations.
Accurate and explainable detection, via deep learning, of acute intracranial haemorrhage from computed tomography images of the head is achievable with small amounts of data for model training.
Clinical implementations of machine learning that are accurate, robust and interpretable will eventually gain the trust of healthcare providers and patients.
A deep-learning algorithm enables the real-time video-based recognition of polyps during colonoscopy, with sensitivities and specificities surpassing 90%.
A holographic approach relying on small-molecule chromogens enables a rapid and inexpensive test for the accurate classification of aggressive lymphoma at the point of care.
A microfluidic device for assaying neutrophil motility in blood samples from sepsis patients and a machine-learning algorithm trained with the motility data enable a faster and accurate sepsis diagnosis.
Interventional healthcare will evolve from an artisanal craft based on the individual experiences, preferences and traditions of physicians into a discipline that relies on objective decision-making on the basis of large-scale data from heterogeneous sources.
Stimulated Raman spectroscopy combined with machine learning generates histological images for the rapid diagnosis and classification of brain tumours.