Deep-space exploration missions will require new technologies that can support astronaut health systems, as well as biological monitoring and research systems that can function independently from Earth-based mission control centres. A NASA workshop explored how artificial intelligence advances could help address these challenges and, in this second of two Review articles based on the findings from the workshop, the intersection between artificial intelligence and space biology is discussed.
Deep-space exploration missions require new technologies that can support astronaut health systems as well as biological monitoring and research systems that can function independently from Earth-based mission control centres. A NASA workshop explored how artificial intelligence advances could help address these challenges and, in this first of two Review articles based on the findings from the workshop, a vision for autonomous biomonitoring and precision space health is discussed.
Simulated data are an alternative to real data for medical applications in which interventional data are needed to train AI-based systems. Gao and colleagues develop a model transfer paradigm to train deep networks on synthetic X-ray data and corresponding labels generated from CT scans using simulation techniques. The approach establishes synthetic data as a viable resource for developing machine learning models that apply to real clinical data.
Modelling stochastic reaction networks involves solving a system of ordinary differential equations for the joint probability distribution of species counts, which becomes challenging as the number of reactive species grows. A new approach based on evolving a variational autoregressive neural network provides an efficient way to track the time evolution of this distribution for general reaction networks.
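As a toy illustration of the autoregressive idea (not the paper's model; the network, weights and sizes below are arbitrary placeholders), a joint distribution over species counts can be factorized into a chain of learned conditionals:

```python
import numpy as np

# Illustrative sketch: an autoregressive factorization of a joint
# distribution over counts of K species,
#   P(n_1, ..., n_K) = prod_k P(n_k | n_1, ..., n_{k-1}).
# Each conditional is the softmax output of a toy linear model;
# all weights are random placeholders, not a trained network.

rng = np.random.default_rng(0)
K, N = 3, 5                              # species, max copy number
W = rng.normal(0, 0.1, size=(K, K, N))   # toy parameters
bias = np.zeros((K, N))

def conditional(k, prefix):
    """P(n_k = . | counts of species 1..k-1), via softmax."""
    x = np.zeros(K)
    x[:len(prefix)] = prefix
    logits = x @ W[k] + bias[k]
    e = np.exp(logits - logits.max())
    return e / e.sum()

def joint_prob(counts):
    """Probability of one full configuration of counts."""
    p = 1.0
    for k in range(K):
        p *= conditional(k, counts[:k])[counts[k]]
    return p

# The factorization is normalized by construction:
total = sum(joint_prob([a, b, c])
            for a in range(N) for b in range(N) for c in range(N))
print(round(total, 6))  # → 1.0
```

Because each conditional is a softmax, the factorized distribution sums to one by construction, which is what makes tracking the full joint distribution tractable as the number of species grows.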
Computational models can help predict metabolic profiles of microbial communities such as human gut microbiomes or environmental microbiomes, but they lack generalizability and interpretability. To address this challenge, Wang et al. report a deep learning approach for metabolic profile prediction called mNODE that incorporates a neural network module with hidden layers described by ordinary differential equations.
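The idea of hidden layers described by ordinary differential equations can be sketched in a few lines (an illustrative toy only; mNODE's actual architecture, solver and training procedure differ, and the weights here are random placeholders):

```python
import numpy as np

# Minimal neural-ODE sketch: a hidden state h(t) evolves under
# dh/dt = f(h; W), where f is a small neural network, integrated
# here with a fixed-step Euler solver.

rng = np.random.default_rng(1)
D = 4                                  # hidden dimension
W1 = rng.normal(0, 0.3, (D, D))        # placeholder weights
W2 = rng.normal(0, 0.3, (D, D))

def f(h):
    """Toy dynamics network: one tanh hidden layer."""
    return np.tanh(h @ W1) @ W2

def odeint_euler(h0, t1=1.0, steps=100):
    """Integrate dh/dt = f(h) from t=0 to t=t1 with Euler steps."""
    h, dt = h0.copy(), t1 / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = rng.normal(size=D)                # e.g. an encoded input profile
h1 = odeint_euler(h0)
print(h1.shape)  # → (4,)
```

In a model of this kind, the ODE block replaces a stack of discrete layers: depth becomes continuous integration time, and the dynamics network's parameters are what training adjusts.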
Various post-hoc interpretability methods exist to evaluate the results of machine learning classification and prediction tasks. To better understand the performance and reliability of such methods, which is particularly necessary in high-risk applications, Turbe et al. have developed a framework for quantitative comparison of post-hoc interpretability approaches in time-series classification.
Metal–organic frameworks are of high interest for a range of energy and environmental applications due to their stable gas storage properties. A new machine learning approach based on a pre-trained multi-modal transformer can be fine-tuned with small datasets to predict structure–property relationships and design new metal–organic frameworks for a range of specific tasks.
An increasing number of regulations demand transparency in automated decision-making processes such as in automated online recruitment. To provide meaningful transparency, Sloane et al. propose the use of ‘nutritional’ labels that display specific information about an automated decision system, depending on the context.
Neuro-symbolic artificial intelligence approaches display both perception and reasoning capabilities, but inherit the limitations of their individual deep learning and symbolic artificial intelligence components. By combining neural networks and vector-symbolic architectures, Hersche and colleagues propose a neuro-vector-symbolic framework that can solve Raven’s progressive matrices tests faster and more accurately than other state-of-the-art methods.
Explanatory interactive machine learning methods have been developed to facilitate the learning process between the machine and the user. Friedrich et al. provide a unification of various explanatory interactive machine learning methods into a single typology, and present benchmarks for evaluating such methods.
Machine learning methods can predict and recognize binding patterns between T-cell receptors and human antigens, but they struggle with antigens for which no or little data exist regarding interactions with the immune system. A new method called PanPep based on meta-learning can learn quickly on new binding prediction tasks and accurately predicts pairing between T-cell receptors and new antigens.
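In the spirit of meta-learning (a generic few-shot adaptation sketch, not PanPep's actual architecture or training scheme; the linear model and data are placeholders), a new task is handled by taking a few gradient steps from a meta-learned initialization:

```python
import numpy as np

# Illustrative inner-loop adaptation: starting from a (pretend)
# meta-learned initialization, fit a tiny "new task" from only a
# handful of labelled examples with a few gradient steps.

rng = np.random.default_rng(4)
d = 6
theta_meta = rng.normal(0, 0.1, d)     # placeholder meta-learned init

def adapt(theta, X, y, lr=0.01, steps=5):
    """A few gradient-descent steps on squared loss for the new task."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

# tiny new task with only 4 labelled examples (random stand-ins)
X = rng.normal(size=(4, d))
y = rng.normal(size=4)
theta_new = adapt(theta_meta, X, y)

loss_before = np.mean((X @ theta_meta - y) ** 2)
loss_after = np.mean((X @ theta_new - y) ** 2)
print(loss_after <= loss_before)
```

The point of the meta-learning outer loop (omitted here) is to choose the initialization so that this cheap inner-loop adaptation works well across many tasks, including ones with little or no prior data.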
High-quality annotation of datasets is critical for machine-learning-based biomedical image analysis. However, a detailed examination of recent image competitions reveals a gap between annotators' needs and the quality of labelling instructions. It is also found that annotator performance can be substantially improved by providing exemplary images.
Training a deep neural network can be costly but training time is reduced when a pre-trained network can be adapted to different use cases. Ideally, only a small number of parameters needs to be changed in this process of fine-tuning, which can then be more easily distributed. In this Analysis, different methods of fine-tuning with only a small number of parameters are compared on a large set of natural language processing tasks.
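One family of such methods learns a small additive update to frozen pretrained weights, for example a low-rank correction in the style of LoRA. A minimal sketch, with arbitrary sizes and random placeholder weights rather than a real pretrained model:

```python
import numpy as np

# Illustrative parameter-efficient fine-tuning: freeze a pretrained
# weight matrix W and learn only a low-rank correction A @ B.
# Only A and B would be updated (and distributed) after fine-tuning.

rng = np.random.default_rng(2)
d, r = 768, 8                          # hidden size, low rank
W = rng.normal(0, 0.02, (d, d))        # frozen "pretrained" weights
A = rng.normal(0, 0.02, (d, r))        # trainable
B = np.zeros((r, d))                   # trainable, zero-initialized so
                                       # the model starts unchanged

def adapted_forward(x):
    """Forward pass: frozen weights plus the low-rank delta."""
    return x @ W + x @ A @ B

extra, full = A.size + B.size, W.size
print(f"trainable fraction: {extra / full:.4f}")  # → trainable fraction: 0.0208
```

Zero-initializing `B` makes the adapted model exactly match the pretrained one before fine-tuning begins, so training only has to learn the task-specific deviation, roughly two per cent of the parameters in this sketch.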
We explore the intersection between algorithms and the State from the perspectives of legislative action, public perception and the use of AI in public administration. Taking India as a case study, we discuss the potential fallout from the absence of rigorous scholarship on such questions for countries in the Global South.
A recent data competition steers clear of leaderboard chasing and promotes the use of a diverse range of metrics to develop rounded, practical algorithms.
Predicting RNA degradation is a fundamental task in designing RNA-based therapeutic agents. Dual crowdsourcing efforts for dataset creation and machine learning were organized to learn biological rules and strategies for predicting RNA stability.
Reinforcement learning is a powerful technique to learn complex behaviours, but in the context of self-driving vehicles it might result in unsafe behaviour in previously unseen situations. Cao et al. create a confidence-aware method that improves through reinforcement learning but reverts to safe behaviour when a situation is new.
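The general pattern of such a confidence-aware fallback can be sketched as follows (an illustrative toy, not Cao et al.'s algorithm; the ensemble, threshold and actions are placeholders):

```python
import numpy as np

# Illustrative confidence gate: use ensemble disagreement as a
# novelty proxy, and fall back to a rule-based safe action when
# the learned policy's confidence in the current state is low.

rng = np.random.default_rng(3)
n_actions, n_heads = 4, 5
SAFE_ACTION = 0                        # e.g. brake / conservative manoeuvre

def q_ensemble(state):
    """Toy ensemble of Q-value heads; stands in for trained networks."""
    return rng.normal(state.sum(), 1.0, (n_heads, n_actions))

def act(state, tau=0.5):
    """Pick the learned action if confident, else the safe fallback."""
    q = q_ensemble(state)
    disagreement = q.std(axis=0).mean()        # high in novel states
    confidence = 1.0 / (1.0 + disagreement)
    if confidence < tau:
        return SAFE_ACTION                     # revert to safe behaviour
    return int(q.mean(axis=0).argmax())        # exploit learned policy

a = act(np.zeros(8))
print(0 <= a < n_actions)  # → True
```

The design choice is that the fallback triggers on epistemic uncertainty rather than on predicted reward, so the system defaults to safe behaviour precisely in the previously unseen situations where the learned policy is least trustworthy.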
Developing proprioception systems for flexible structures such as soft robots is a challenge. Hu et al. report a stretchable e-skin for soft robot proprioception. Combined with deep learning, the e-skin enables high-resolution 3D geometry reconstruction of the soft robot and can be applied in many scenarios, such as human–robot interaction.
The mechanical signals of the laryngeal vocal organ have not been well utilized by human speech processing technology. The authors develop a prototype of a wearable artificial throat that can sense speech- and vocalization-related actions. The results suggest a new technological pathway for speech recognition and interaction systems.
When learning a causal model from data, deriving counterfactual examples from the model can help to evaluate how plausible the mechanisms are and to create hypotheses that can be tested with new data. Vlontzos and colleagues develop a deep-learning-based method for answering counterfactual queries that can deal with categorical variables, rather than only binary ones, using the notion of ‘counterfactual ordering’.