Designing antibodies and assessing their biophysical properties for potential therapeutic development is challenging with current computational methods. Ramon et al. have developed a deep learning approach called AbNatiV, based on a vector-quantized variational autoencoder, that accurately assesses the nativeness of antibodies and nanobodies, which are small single-domain antibodies that have recently attracted considerable interest.
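The vector-quantization step at the heart of such models can be sketched in a few lines: continuous encoder outputs are snapped to their nearest entries in a learned discrete codebook. This is a minimal illustration of the general VQ operation, with hypothetical shapes, not the AbNatiV implementation itself.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each continuous encoder output to its nearest codebook vector.

    z:        (n_positions, d) encoder outputs, e.g. one per sequence
              position (illustrative shapes, not AbNatiV's).
    codebook: (k, d) learned discrete embeddings.
    Returns the quantized vectors and their codebook indices.
    """
    # Squared Euclidean distance from every z_i to every codebook entry
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)   # nearest-neighbour lookup per position
    return codebook[idx], idx

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 8))          # 5 positions, 8-dim embeddings
codebook = rng.normal(size=(16, 8))  # 16 discrete codes
zq, idx = vector_quantize(z, codebook)
```

In a trained VQ-VAE the codebook is learned jointly with the encoder and decoder, and gradients are passed through the non-differentiable lookup with a straight-through estimator; the snippet above shows only the inference-time quantization.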
Drug design has recently seen immense improvements in computational methods, but models still struggle to generalize across binding pockets. Feng and colleagues combine a language model with geometric deep learning to provide efficient generation of potential new drugs.
Accurate real-time tracking of dexterous hand movements and interactions has applications in human–computer interaction, the metaverse, robotics and tele-health. Capturing realistic hand movements is challenging due to the large number of articulations and degrees of freedom. Tashakori and colleagues report accurate and dynamic tracking of articulated hand and finger movements using machine-learning-powered stretchable, washable smart gloves.
Magnetic microrobots are of considerable interest for non-invasive biomedical applications, but it is challenging to develop a general strategy for controlling microrobot positions across varying configurations and environments. Choi et al. develop a reinforcement learning control method, training the model in a simulation environment for initial exploration, after which the learning process is transferred to a physical electromagnetic actuation system.
Multi-animal behaviour quantification is pivotal for deciphering animal social behaviours and has broad applications in neuroscience and ecology. Han and colleagues develop a few-shot learning framework for multi-animal 3D pose estimation, identity recognition and social behaviour classification.
Feed-forward neural networks have become powerful tools in machine learning, but their behaviour during optimization is still not well understood. Ciceri and colleagues find that during optimization, class representations first separate and then rejoin, prompted by specific elements of the training set.
The implementation of particle-tracking techniques with deep neural networks is a promising way to determine particle motion within complex flow structures. A graph neural network-enhanced method enables accurate particle tracking by significantly reducing the number of lost trajectories.
New research reveals a duality between neural network weights and neuron activities that enables a geometric decomposition of the generalization gap. The framework provides a way to interpret the effects of regularization schemes such as stochastic gradient descent and dropout on generalization — and to improve upon these methods.
A framework for training artificial neural networks in physical space allows neuroscientists to build networks that look and function like real brains.
Borrowing the format of public competitions from engineering and computer science, a new type of challenge in 2023 tested real-world AI applications with legal assessments based on the EU AI Act.
Machine learning methods in cheminformatics have made great progress in using the chemical structures of molecules, but the large body of textual information about molecules remains scarcely explored. Liu and colleagues trained MoleculeSTM, a foundation model that aligns the structure and text modalities through contrastive learning, and show its utility on the downstream tasks of structure–text retrieval, text-guided editing and molecular property prediction.
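The contrastive alignment objective used for such two-modality models can be sketched as a symmetric InfoNCE-style loss: paired structure and text embeddings of the same molecule are pulled together, unpaired ones pushed apart. The function below is a generic illustration with made-up names and shapes, not the exact MoleculeSTM implementation.

```python
import numpy as np

def contrastive_alignment_loss(struct_emb, text_emb, temperature=0.1):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    struct_emb, text_emb: (batch, d) arrays; row i of each embeds the
    same molecule (illustrative, not MoleculeSTM's actual code).
    """
    # L2-normalize, then cosine similarities between all batch pairs
    s = struct_emb / np.linalg.norm(struct_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature
    # Cross-entropy with matching pairs on the diagonal, both directions
    ls = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    lt = logits.T - np.log(np.exp(logits.T).sum(1, keepdims=True))
    return -0.5 * (np.mean(np.diag(ls)) + np.mean(np.diag(lt)))

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 16))
loss_mismatched = contrastive_alignment_loss(a, rng.normal(size=(4, 16)))
loss_matched = contrastive_alignment_loss(a, a)  # perfectly aligned pairs
```

As expected, the loss is much smaller when the two modalities' embeddings already agree, which is exactly the gradient signal that drives the two encoders toward a shared space.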
Theoretical frameworks aiming to understand deep learning rely on a so-called infinite-width limit, in which the ratio between the training set size and the width of the hidden layers goes to zero. Pacelli and colleagues go beyond this restrictive framework by computing the partition function and generalization properties of fully connected, nonlinear neural networks, both with one and with multiple hidden layers, for the practically more relevant scenario in which this ratio is finite and arbitrary.
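For concreteness, the two regimes can be written down with illustrative symbols (P for the training set size, N for the hidden-layer width; the notation is assumed here, not quoted from the paper):

```latex
\underbrace{\alpha = \frac{P}{N} \to 0}_{\text{infinite-width limit: } N \to \infty,\; P \text{ fixed}}
\qquad \text{versus} \qquad
\underbrace{\alpha = \frac{P}{N} \ \text{finite and arbitrary}}_{\text{finite-width regime}}
```

The first regime makes networks tractable (they behave like Gaussian processes) but describes models far wider than any practical architecture; the second keeps the width comparable to the amount of data, which is where real networks operate.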
Skin-like flexible electronics (electronic skin) has great potential in medical practice to enable continuous tracking of physical and biochemical information. Xu et al. review the integration of AI methods and electronic skins, especially how data collected from sensors are processed by AI to extract features for human–machine interaction and health monitoring purposes.
Interest in using large language models such as ChatGPT has grown rapidly, but concerns about safe and responsible use have emerged, in part because adversarial prompts can bypass existing safeguards with so-called jailbreak attacks. Wu et al. build a dataset of various types of jailbreak attack prompts and demonstrate a simple but effective technique to counter these attacks by encapsulating users’ prompts in another standard prompt that reminds ChatGPT to respond responsibly.
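The encapsulation idea is simple enough to sketch directly: the user's query is wrapped between reminder text before being sent to the model. The wrapper wording below is an illustrative paraphrase of the self-reminder idea, not the exact prompt from the paper.

```python
def encapsulate_prompt(user_prompt: str) -> str:
    """Wrap a user's prompt in a safety reminder before it reaches the
    model. The reminder text is illustrative, not Wu et al.'s exact
    wording."""
    return (
        "You should be a responsible assistant and must not generate "
        "harmful or misleading content. Please answer the following "
        "query in a responsible way.\n"
        f"{user_prompt}\n"
        "Remember: you should respond responsibly."
    )

wrapped = encapsulate_prompt("How do I pick a strong passphrase?")
```

Because the defense operates purely on the prompt text, it requires no access to model weights and can be layered in front of any chat API, which is what makes it attractive despite its simplicity.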
Machine learning models have been widely used in the inverse design of new materials, but typically only linear properties can be targeted. Bastek and Kochmann show that video diffusion generative models can produce the nonlinear deformation and stress response of cellular materials under large-scale compression.
Virtual drug design has seen recent progress in methods that can generate new molecules with specific properties. Separately, methods have also improved in the task of computationally predicting the outcome of chemical reactions. Qiang and colleagues use the close relation of the two problems to train a model that aims at solving both tasks.
Data-driven surrogate models are used in computational physics and engineering to greatly speed up evaluations of the properties of partial differential equations, but they come with a heavy computational cost associated with training. Pestourie et al. combine a low-fidelity physics model with a generative deep neural network and demonstrate improved accuracy–cost trade-offs compared with standard deep neural networks and high-fidelity numerical solvers.