Traditional sensing techniques apply computational analysis at the output of the sensor hardware to separate signal from noise. A new, more holistic and potentially more powerful approach proposed in this Perspective is to design intelligent sensor systems that ‘lock in’ to optimal sensing of data, making use of machine learning strategies.
Methods are available to support clinical decisions regarding adjuvant therapies in breast cancer, but they have limitations in accuracy, generalizability and interpretability. Alaa et al. present an automated machine learning model of breast cancer that predicts patient survival and adjuvant treatment benefit to guide personalized therapeutic decisions.
With edge computing on custom hardware, real-time inference with deep neural networks can reach the nanosecond timescale. An important application in this regime is event processing at particle collision detectors like those at the Large Hadron Collider (LHC). To ensure high performance as well as reduced resource consumption, a method is developed, and made available as an extension of the Keras library, to automatically design optimal quantization of the different layers in a deep neural network.
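The core operation here, weight quantization, can be conveyed with a minimal NumPy sketch (illustrative names and a uniform scheme chosen for clarity; this is not the Keras extension itself, which searches per-layer bit widths jointly against accuracy and hardware cost). Each weight is snapped to the nearest of 2^bits uniformly spaced levels, so fewer bits mean coarser weights and lower resource use at some accuracy price.

```python
import numpy as np

def quantize_weights(w, bits):
    """Snap each weight to the nearest of 2**bits uniformly spaced levels
    spanning [-max|w|, +max|w|]. Illustrative sketch only."""
    w = np.asarray(w, dtype=float)
    w_max = np.max(np.abs(w))
    if w_max == 0:
        return w
    levels = 2 ** bits
    step = 2 * w_max / (levels - 1)        # spacing between adjacent levels
    idx = np.round((w + w_max) / step)     # index of the nearest level
    return idx * step - w_max

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                  # stand-in for one layer's weights
for bits in (8, 4, 2):
    mse = np.mean((w - quantize_weights(w, bits)) ** 2)
    print(f"{bits}-bit weights, mean squared quantization error: {mse:.6f}")
```

The trade-off the automated search navigates is visible even in this toy: halving the bit width raises the quantization error, and different layers tolerate that error differently.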
Single-cell RNA sequencing efforts have made large amounts of data available for transcriptomics research. Simon and colleagues develop a neural network embedding approach that avoids batch effects, such that it can rapidly and efficiently integrate large datasets from different studies.
The response of the body to drugs follows complex dynamical processes that can be difficult to predict. Lu and colleagues combine a neural network approach with pharmacokinetic/pharmacodynamic modelling to learn these complex dynamics.
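For readers unfamiliar with the mechanistic side, a minimal one-compartment pharmacokinetic simulation (an illustrative sketch with assumed parameter names, not the authors' hybrid model) shows the kind of dose–response dynamics that such neural/PK-PD combinations aim to capture and extend:

```python
import numpy as np

def one_compartment_pk(dose, k_abs, k_el, t_end=24.0, dt=0.01):
    """Euler simulation of a one-compartment model with first-order
    absorption from the gut and first-order elimination from plasma."""
    steps = int(t_end / dt)
    gut, plasma = dose, 0.0
    conc = np.empty(steps)
    for i in range(steps):
        absorbed = k_abs * gut * dt        # drug moving gut -> plasma
        eliminated = k_el * plasma * dt    # drug cleared from plasma
        gut -= absorbed
        plasma += absorbed - eliminated
        conc[i] = plasma
    return conc

c = one_compartment_pk(dose=100.0, k_abs=1.0, k_el=0.1)
print(f"peak plasma level {c.max():.1f} at t = {c.argmax() * 0.01:.2f} h")
```

Real responses deviate from such clean compartmental curves, which is precisely the gap the learned component of a hybrid model is meant to fill.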
Accurate and fair medical machine learning requires large amounts of diverse data to train on. Privacy-preserving methods such as federated learning can help improve machine learning models by making use of datasets held at different hospitals and institutes while the data stays where it was collected.
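The essence of federated averaging can be sketched in plain NumPy (hypothetical names; a linear model stands in for a real clinical model): each simulated hospital trains locally on its own data, and only model weights, never raw records, are shared and aggregated by the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model (squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server aggregates client models, weighted by local dataset size."""
    return np.average(client_weights, axis=0, weights=np.asarray(client_sizes))

# Three simulated hospitals, each holding its own private dataset.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                        # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    w = federated_average(updates, [len(y) for _, y in clients])
print("federated model weights:", w)
```

The aggregated model approaches the weights it would have learned on the pooled data, even though no client ever transmits its raw dataset.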
Large language models, which are increasingly used in AI applications, display undesirable stereotypes such as persistent associations between Muslims and violence. New approaches are needed to systematically reduce the harmful bias of language models in deployment.
Online targeted advertising fuelled by machine learning can lead to the isolation of individual consumers. This problem of ‘epistemic fragmentation’ cannot be tackled with current regulation strategies and a new, civic model of governance for advertising is needed.
Drug repurposing provides a way to identify effective treatments more quickly and economically. To speed up the search for antiviral treatment of COVID-19, a new platform provides a range of computational models to identify drugs with potential anti-COVID-19 effects.
Neural networks are becoming increasingly popular for applications in various domains, but in practice, further methods are necessary to make sure the models are learning patterns that agree with prior knowledge about the domain. A new approach introduces an explanation method, called ‘expected gradients’, that enables training with theoretically motivated feature attribution priors, to improve model performance on real-world tasks.
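A minimal sketch of the expected-gradients attribution idea, using a toy analytic model rather than a trained network (all names here are illustrative; the paper additionally uses these attributions as priors during training): gradients are averaged along straight paths from randomly drawn background samples to the input of interest.

```python
import numpy as np

def expected_gradients(f_grad, x, background, rng, n_samples=200):
    """Monte Carlo estimate of expected-gradients attributions for input x:
    average (x - baseline) * grad(f) at random points on baseline->x paths,
    with baselines drawn from a background dataset."""
    attributions = np.zeros_like(x)
    for _ in range(n_samples):
        baseline = background[rng.integers(len(background))]
        alpha = rng.uniform()                    # random point along the path
        point = baseline + alpha * (x - baseline)
        attributions += (x - baseline) * f_grad(point)
    return attributions / n_samples

# Toy model f(x) = x0**2 + 3*x1, with its analytic gradient.
f_grad = lambda x: np.array([2 * x[0], 3.0])
rng = np.random.default_rng(0)
background = rng.normal(size=(500, 2))           # stand-in background data
x = np.array([1.0, 2.0])
print("attributions:", expected_gradients(f_grad, x, background, rng))
```

For the linear feature, the attribution concentrates near 3 × (x₁ − mean background x₁), matching intuition: attributions measure how much each feature moves the output relative to typical inputs.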
Monoclonalization, the isolation and expansion of a single cell derived from a cultured population, is an essential step in large-scale human cell culture and experiments. A new deep learning-based workflow called Monoqlo automatically detects colony presence and identifies clonality from cellular imaging, enabling single-cell selection protocols to be scalable while minimizing technical variability.
The urgency of the developing COVID-19 epidemic has led to a large number of novel diagnostic approaches, many of which use machine learning. DeGrave and colleagues use explainable AI techniques to analyse a selection of these approaches and find that the methods frequently learn to identify features unrelated to the actual disease.
Gaining access to medical data to train AI applications can present problems due to patient privacy or proprietary interests. A way forward can be privacy-preserving federated learning schemes. Kaissis, Ziller and colleagues demonstrate here their open source framework for privacy-preserving medical image analysis in a remote inference scenario.
In the last few years, computational protein structure prediction has advanced greatly by combining deep learning, in particular convolutional residual networks (ResNets), with co-evolution data. A new study finds that using deeper and wider ResNets improves predictions in the absence of co-evolution information, suggesting that the ResNets do not simply de-noise co-evolution signals but instead may learn important protein sequence–structure relationships.
Calcium imaging is a valuable tool for recording in vivo neural activity, but the task of extracting signals of individual neurons is computationally challenging. Bao and colleagues present a U-Net-based method that is both accurate and fast enough to potentially allow real-time processing and closed-loop experiments.
A white paper from Partnership on AI provides timely advice on tackling the urgent challenge of navigating risks of AI research and responsible publication.
Modern machine learning approaches, such as deep neural networks, generalize well despite interpolating noisy data, in contrast with textbook wisdom. Mitra describes the phenomenon of statistically consistent interpolation (SCI) to clarify why data interpolation succeeds, and discusses how SCI elucidates the differing approaches to modelling natural phenomena represented in modern machine learning, traditional physical theory and biological brains.