Large language models have recently emerged with extraordinary capabilities, and these methods can be applied to model other kinds of sequences, such as string representations of molecules. Ross and colleagues have created a transformer-based model, trained on a large dataset of molecules, that achieves strong results on property prediction tasks.
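Before a transformer can model molecules as sequences, the string representation must be split into tokens. The sketch below shows a minimal tokenizer for SMILES strings, a common string representation of molecules; the token set and regular expression are illustrative assumptions, not the preprocessing used by Ross and colleagues.

```python
import re

# Hypothetical SMILES token pattern: bracketed atoms, two-letter elements,
# single-letter (aromatic and aliphatic) atoms, ring-closure digits and
# bond/branch symbols. A real vocabulary would cover the full SMILES grammar.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|@@|[BCNOPSFIbcnops]|[0-9]|[=#\-\+\(\)/\\%])"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into chemically meaningful tokens."""
    tokens = SMILES_TOKEN.findall(smiles)
    # Sanity check: the tokens must reconstruct the input exactly.
    assert "".join(tokens) == smiles, f"untokenizable SMILES: {smiles}"
    return tokens

# Aspirin as a SMILES string.
print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))
```

The resulting token sequence can then be embedded and fed to a standard transformer encoder, exactly as word tokens are in a language model.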
A goal of AI is to develop autonomous artificial agents with a wide set of skills. The authors propose the immersion of intrinsically motivated agents within rich socio-cultural worlds, focusing on language as a way for artificial agents to develop new cognitive functions.
2022 has seen eye-catching developments in AI applications. Work is needed to ensure that ethical reflection and responsible publication practices are keeping pace.
The notion of ‘interpretability’ of artificial neural networks (ANNs) is of growing importance in neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists as opposed to AI researchers. In this article, we discuss the potential synergies and tensions between these two communities in interpreting ANNs.
Liquid chromatography–tandem mass spectrometry (LC-MS2) provides high-throughput screening of molecules with a large number of features, but it is difficult to associate these features with the specific molecular structure of each sample. To improve structure prediction, Bach et al. propose a machine learning model that combines the different kinds of features provided by LC-MS2 and also takes stereochemistry into account.
The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort to support experimentation with different ethics review processes, to study their effects, and to provide opportunities for diverse voices from the community to share insights and foster norms.
There is growing interest in using sophisticated machine learning models for the prediction of molecular properties, such as the potency of novel drugs. However, Janela and Bajorath show that simple nearest-neighbour analysis meets or exceeds the accuracy of state-of-the-art complex machine learning methods and that randomized prediction models still reproduce compound potency values within an order of magnitude.
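The nearest-neighbour baseline can be sketched in a few lines: represent each compound as a binary fingerprint, rank training compounds by similarity to the query, and average the potencies of the top k. The toy fingerprints and the choice of Tanimoto similarity below are assumptions (Tanimoto is a common cheminformatics choice), not the exact setup of Janela and Bajorath.

```python
def tanimoto(fp_a: set[int], fp_b: set[int]) -> float:
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 0.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def knn_potency(query: set[int], training, k: int = 3) -> float:
    """Predict potency as the mean over the k most similar training compounds."""
    ranked = sorted(training, key=lambda c: tanimoto(query, c[0]), reverse=True)
    return sum(potency for _, potency in ranked[:k]) / k

# Toy training data: (fingerprint on-bits, potency value, e.g. pKi).
train = [
    ({1, 2, 3, 5}, 7.2),
    ({1, 2, 4}, 6.8),
    ({2, 3, 5, 8}, 7.0),
    ({9, 10, 11}, 4.5),
]
print(knn_potency({1, 2, 3}, train))  # averages the three most similar compounds
```

Despite its simplicity, this kind of similarity-based baseline is the comparison point that complex models must beat.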
Applying deep reinforcement learning to robot control poses challenges. The authors review methods for transferring deep reinforcement learning policies learned in simulation to the real world.
Designing legged robots that are both agile and fast is challenging. The authors present a method based on reinforcement learning controllers for the locomotion control of quadruped robots. The pipeline achieves improvements in performance, such as higher running speed.
Predicting RNA degradation is a fundamental task in designing RNA-based therapeutics. Two crowdsourcing platforms, Kaggle and Eterna, united to develop accurate deep learning models for RNA degradation on a timescale of 6 months.
Antibodies are an essential class of therapeutics, but low breadth and off-target binding are major concerns for antibody–drug efficacy and safety. To predict which targets an antibody can neutralize, the authors propose a machine learning pipeline based on an adaptive graph convolutional network architecture that learns the binding landscape of antibodies to multiple mutated viruses simultaneously.
The problem of reconstructing full-field quantities from incomplete observations arises in various real-world applications. Güemes and colleagues propose a super-resolution algorithm based on a generative adversarial network that can achieve reconstruction of the underlying field from random sparse measurements without requiring full-field high-resolution training data.
Evolutionary computation is a very active field of research, with an ever-growing number of metaheuristic optimization algorithms being published. A serious problem plaguing the field is the use of inadequate benchmarks. Kudela exposes the issue and provides recommendations that can help to fairly evaluate and compare new methods.
Learning minimal representations of dynamical systems is essential for mathematical modelling and prediction in science and engineering. Floryan and Graham propose a deep learning framework able to estimate accurate global dynamical models by stitching together multiple local representations learnt from high-dimensional time-series data.
Advances in ultra-widefield retinal imaging have created a need for automated disease detection. Engelmann and colleagues develop a deep learning model for the detection of retinal diseases. They evaluate it under more realistic conditions than previously considered and investigate which regions of ultra-widefield images are important for the model's performance.
In recent years, deep learning techniques have enhanced the ability to extract useful, high-resolution physical information from electron and scanning probe microscopy images. AtomAI, an open-source software package, can accelerate this process by bringing deep learning and simulation tools into a single framework for a range of instruments.