Recurrent neural networks are flexible architectures that can perform a variety of complex, time-dependent computations. Kim and Bassett introduce an alternative, ‘programming’-like computational framework to determine the appropriate network parameters for a specific task without the need for supervised training.
Robots that can change their shape offer flexible functionality. A modular robotic platform is shown that implements physical polygon meshing, by combining triangles with sides of adjustable lengths, allowing flexible three-dimensional shape configurations.
Worldwide weather station forecasting is challenging because of high computational costs and the difficulty of modelling spatiotemporal correlations from partial observations. Wu et al. propose a transformer-based method that can reconstruct such complex correlations from scattered weather stations, leading to efficient and interpretable state-of-the-art forecasts.
Inducing explicit, deep structural constraints in the latent space at the sample level is an open problem for transfer-learning methods derived from natural language processing. McDermott and colleagues propose and analyse a pre-training framework that imposes such structural constraints, and empirically demonstrate its advantages by showing that it outperforms state-of-the-art pre-training methods.
While single-cell multimodal datasets allow for the measurement of individual cells to understand cellular and molecular mechanisms, generating multimodal data for many cells is costly and challenging. Cohen Kalafut and colleagues develop a machine learning model capable of imputing single-cell modalities and prioritizing multimodal features, such as gene expression, chromatin accessibility and electrophysiology.
Immersive virtual reality requires artificial sensory perceptions to simulate what we feel and how we interact in the natural environment. Zhang and colleagues present a first-person, human-triggered, active haptic device that allows users to experience mechanical touch with stiffness perceptions ranging from positive to negative, achieved through the unique properties of curved origami.
Memory efficient online training of recurrent spiking neural networks without compromising accuracy is an open challenge in neuromorphic computing. Yin and colleagues demonstrate that training a recurrent neural network consisting of so-called liquid time-constant spiking neurons using an algorithm called Forward-Propagation Through Time allows for online learning and state-of-the-art performance at a reduced computational cost compared with existing approaches.
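The "liquid time-constant" spiking neuron mentioned above can be illustrated with a minimal sketch: a leaky integrate-and-fire unit whose membrane time constant depends on the input. The sigmoid parameterization of the time constant and all numerical values here are illustrative assumptions, not the authors' model or their Forward-Propagation Through Time algorithm.

```python
import math

def lif_step(v, x, tau, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    v: membrane potential, x: input current, tau: membrane time constant.
    Returns (new_potential, spike), with spike = 1 if the threshold was crossed."""
    v = v + dt / tau * (-v + x)
    if v >= v_th:
        return 0.0, 1          # reset the membrane after a spike
    return v, 0

def liquid_tau(x, tau_min=1.0, tau_max=20.0):
    """Input-dependent ('liquid') time constant: a sigmoid of the input
    interpolates between tau_min and tau_max (hypothetical parameterization)."""
    s = 1.0 / (1.0 + math.exp(-x))
    return tau_min + (tau_max - tau_min) * s

# Drive the neuron with a constant supra-threshold input and count spikes.
v, spikes = 0.0, 0
for _ in range(50):
    x = 1.5
    v, s = lif_step(v, x, liquid_tau(x))
    spikes += s
```

With a constant input above threshold, the membrane charges towards the input value, fires, resets, and repeats, so the neuron emits a regular spike train.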
To detect phenotype-related cell subpopulations from single-cell data, appropriate feature sets need to be chosen or learned simultaneously. Ren et al. present here a tool based on Learning with Rejection, a method that during training learns features from cells that can be predicted with high confidence, while cells that the model is not yet certain about are rejected.
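The reject option described above can be sketched as selective prediction: a classifier labels a cell only when its top class probability clears a confidence threshold, and otherwise abstains. The threshold value and the toy probability vectors are illustrative assumptions, not the authors' trained model.

```python
def predict_with_rejection(probs, threshold=0.9):
    """Selective prediction: return the arg-max class when the model's
    top probability clears `threshold`, otherwise reject (None).
    `probs` is a list of per-class probabilities for one cell."""
    best = max(range(len(probs)), key=lambda k: probs[k])
    if probs[best] >= threshold:
        return best
    return None  # rejected: defer this cell instead of forcing a label

cells = [
    [0.97, 0.02, 0.01],   # confident -> class 0
    [0.40, 0.35, 0.25],   # ambiguous -> rejected
    [0.05, 0.93, 0.02],   # confident -> class 1
]
labels = [predict_with_rejection(p) for p in cells]
# labels == [0, None, 1]
```

During training, the rejected cells are the ones the model is not yet certain about; only the confidently predicted cells contribute to feature learning.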
Particle tracking velocimetry, which estimates particle displacements in fluid flows, is challenging in complex experimental scenarios and often comes with high computational cost. Liang and colleagues propose an algorithm based on graph neural networks and optimal transport that can greatly improve the accuracy of existing tracking algorithms in real-world applications.
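The optimal-transport side of such a matching problem can be sketched with entropic regularization (Sinkhorn iterations): given a cost matrix between particles in consecutive frames, the transport plan concentrates mass on low-cost correspondences. The cost values and regularization strength below are illustrative assumptions; the authors' method additionally uses learned graph features.

```python
import math

def sinkhorn(cost, eps=0.1, iters=200):
    """Entropic optimal transport between two uniform point sets.
    cost[i][j] is the matching cost (e.g. squared displacement) between
    particle i in frame t and particle j in frame t+1.
    Returns the transport plan as a nested list."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    u = [1.0] * n
    v = [1.0] * m
    a, b = 1.0 / n, 1.0 / m            # uniform marginals
    for _ in range(iters):
        u = [a / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Two particles that moved slightly: low cost on the true correspondence.
cost = [[0.01, 1.0],
        [1.0, 0.01]]
plan = sinkhorn(cost)
match = [max(range(2), key=lambda j: plan[i][j]) for i in range(2)]
# match == [0, 1]: each particle is matched to its true successor
```

The plan's row and column sums are pinned to the uniform marginals, which is what prevents several particles from greedily claiming the same successor.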
Online commerce increasingly relies on pricing algorithms. Using a network-based approach inspired by adversarial machine learning, a firm can learn the strategy of its competitors and use it to unilaterally increase all firms’ profits. This approach, termed ‘adversarial collusion’, calls for new regulatory measures.
Deep learning can be used to predict molecular properties, but such methods usually need a large amount of data and are hard to generalize to different chemical spaces. To provide a useful prior for deep learning models, Fang and colleagues use contrastive learning and a knowledge graph based on the Periodic Table and Wikipedia pages on chemical functional groups.
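The contrastive ingredient can be sketched with an InfoNCE-style loss: a molecule's embedding is pulled towards its knowledge-graph-augmented view and pushed away from views of other molecules. The two-dimensional embeddings and the pairing below are hypothetical; in practice both views come from learned encoders.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(anchor, positive, negatives, temp=0.1):
    """InfoNCE-style contrastive loss: pull `anchor` (a molecule embedding)
    towards `positive` (its knowledge-graph-augmented view) and push it
    away from `negatives` (views of other molecules)."""
    logits = [dot(anchor, positive) / temp] + [dot(anchor, n) / temp for n in negatives]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]   # negative log-softmax of the positive pair

anchor    = [1.0, 0.0]
positive  = [0.9, 0.1]                  # augmented view of the same molecule
negatives = [[0.0, 1.0], [-1.0, 0.0]]   # views of other molecules
loss = info_nce(anchor, positive, negatives)
```

Because the anchor is far more similar to its positive view than to the negatives, the loss is close to zero; misaligned views would drive it up and produce a gradient that reshapes the embedding space.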
Biomedical heterogeneous networks offer potentially rich information for computational drug design approaches, but fully labelled multimodal data are rare. To learn useful representations from diverse and unlabelled data, Wang et al. combine multiple self-supervised tasks to train a graph-attention-based model.
Evolutionary computation methods can find useful solutions for many complex real-world science and engineering problems, but in general there is no guarantee of finding the best solution. This challenge can be tackled with a new framework incorporating machine learning that helps evolutionary methods to avoid local optima.
Transformer models are gaining increasing popularity in modelling natural language as they can produce human-sounding text by iteratively predicting the next word in a sentence. Born and Manica apply the idea of Transformer-based text completion to property prediction of chemical compounds by providing the context of a problem and having the model complete the missing information.
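The completion framing can be sketched at the level of input formatting: a property and a molecule are serialized into one token sequence, and at inference time the property slot is masked so the sequence model "completes" it. The token format, property name and mask symbol below are illustrative assumptions, not the authors' exact scheme.

```python
def make_prompt(smiles, prop_name, prop_value=None):
    """Serialize a (property, molecule) pair as one token sequence.
    When `prop_value` is None the property slot is masked, so the same
    sequence model that generates molecules can 'complete' the property."""
    value = "[MASK]" if prop_value is None else f"{prop_value:.2f}"
    return f"<{prop_name}>{value}|{smiles}"

# Training example: the property value is given alongside the molecule.
train = make_prompt("CCO", "esol", -0.77)
# Inference: mask the property and ask the model to fill it in.
query = make_prompt("CCO", "esol")
# train == "<esol>-0.77|CCO", query == "<esol>[MASK]|CCO"
```

The appeal of this framing is symmetry: masking the property tokens turns the model into a property predictor, while masking molecule tokens turns the same model into a conditional generator.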
Sepsis treatment needs to be well timed to be effective and to avoid antibiotic resistance. Machine learning can help to predict optimal treatment timing, but confounders in the data hamper reliability. Liu and colleagues present a method to predict patient-specific treatment effects with increased accuracy, accompanied by an uncertainty estimate.
The potential of deep learning in pathological prognosis has been hampered by limited interpretability in clinical applications. Liang and colleagues present a human-centric deep learning framework that supports the discovery of prognostic biomarkers in an interpretable way.
Generative models in cheminformatics depend on molecules being representable as structured data, such as the simplified molecular-input line-entry system (SMILES). Mokaya and colleagues investigated how the choice of representation influences the quality of generated compounds, and found that string-based representations can hinder performance in a curriculum learning setting.
Computational modelling of the interactions between T-cell receptors (TCRs) and epitopes is a crucial yet challenging scientific problem. Peng and colleagues develop a deep learning model to capture TCR–epitope binding patterns, providing useful insights for understanding TCR recognition.
Simulated data are an alternative to real data for medical applications where interventional data are needed to train AI-based systems. Gao and colleagues develop a model transfer paradigm to train deep networks on synthetic X-ray data and corresponding labels generated using simulation techniques from CT scans. The approach establishes synthetic data as a viable resource for developing machine learning models that apply to real clinical data.
Modelling stochastic reaction networks requires solving the chemical master equation, a system of ordinary differential equations that becomes challenging as the number of reactive species grows. A new approach based on evolving a variational autoregressive neural network provides an efficient way to track the time evolution of the joint probability distribution for general reaction networks.
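The autoregressive idea behind such a network can be sketched with the chain rule: the joint distribution over species copy numbers factorizes into per-species conditionals, each depending only on the species before it. In the actual method each conditional is the output of a neural network; the fixed toy tables below are illustrative assumptions.

```python
def joint_prob(counts, conditionals):
    """Chain-rule factorization p(n1,...,nM) = prod_i p(n_i | n_1..n_{i-1}).
    `conditionals[i]` maps the tuple of earlier counts to a distribution
    over n_i. In a variational autoregressive network each conditional
    would be parameterized by a neural network; here they are toy tables."""
    p = 1.0
    for i, n in enumerate(counts):
        p *= conditionals[i](tuple(counts[:i]))[n]
    return p

# Two species with copy numbers 0 or 1; species 2 depends on species 1.
conds = [
    lambda prev: {0: 0.3, 1: 0.7},                                   # p(n1)
    lambda prev: {0: 0.9, 1: 0.1} if prev[0] == 0 else {0: 0.2, 1: 0.8},
]
p = joint_prob((1, 1), conds)   # 0.7 * 0.8 = 0.56
total = sum(joint_prob((a, b), conds) for a in (0, 1) for b in (0, 1))
# total == 1.0: the factorization is normalized by construction
```

Built-in normalization is the point: the network can represent a joint distribution over exponentially many states while sampling and evaluating probabilities one species at a time.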