Co-designing hardware platforms and neural network software can help improve the computational efficiency and training affordability of deep learning implementations. A new approach designed for graph learning with echo state neural networks makes use of in-memory computing with resistive memory and shows up to a 35-fold improvement in energy efficiency and a 99% reduction in training cost for graph classification on large datasets.
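The key property of echo state networks that makes them attractive for in-memory computing is that the recurrent reservoir weights are random and fixed, so only a linear readout is trained. A minimal NumPy sketch of a standard echo state network on a toy sequence task (the dimensions, toy target, and ridge parameter are illustrative assumptions, not the resistive-memory graph-learning setup of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
n_in, n_res, n_out = 3, 100, 2

# Fixed random input and reservoir weights: these are never trained,
# which is why they can be stored once in resistive memory
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, here via ridge regression
T = 200
U = rng.normal(size=(T, n_in))
Y = U[:, :n_out]  # toy target: echo the first two input channels
X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T

pred = X @ W_out.T  # readout predictions, shape (T, n_out)
```

Because training reduces to one linear solve, the expensive backpropagation-through-time of conventional recurrent networks is avoided entirely, which is the source of the training-cost savings the blurb refers to.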
Disease phenotypes can be predicted from genetic profiles, but diseases with complex, non-additive interactions between genes are hard to disentangle. An approach called DiseaseCapsule makes use of capsule networks to identify the hierarchical structure in genomic data and can predict complex diseases such as amyotrophic lateral sclerosis with high accuracy.
Predicting drug–target interactions with computational models has attracted much attention, but it is difficult for such models to generalize across domains to out-of-distribution data. Bai et al. present a method that aims to model local interactions between proteins and drug molecules while remaining interpretable and providing cross-domain generalization.
In situations where some risk of injury is unavoidable for self-driving vehicles, how risk is distributed becomes an ethical question. Geisslinger and colleagues have developed a planning algorithm that takes five ethical principles into account and aims to comply with the emerging EU regulatory recommendations.
Despite the promise of medical artificial intelligence applications, their acceptance in real-world clinical settings is low, with lack of transparency and trust being barriers that need to be overcome. We discuss the importance of the collaborative process in medical artificial intelligence, whereby experts from various fields work together to tackle transparency issues and build trust over time.
The organizers of the EvalRS recommender systems competition argue that accuracy should not be the only goal and explain how they took robustness and fairness into account.
To fully leverage big data, datasets need to be shared across institutions in a manner that complies with privacy considerations and the EU General Data Protection Regulation (GDPR). Federated machine learning is a promising option.
Gathering big datasets has become an essential component of machine learning in many scientific areas, but it is unavoidable that some data values are missing. An important and growing effect that needs careful attention, especially when heterogeneous data sources are combined, is that of structured missingness, where data values are missing not at random, but with a specific structure.
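A small synthetic example shows why missingness with structure needs careful handling: when whether a value is recorded depends on another variable, naive complete-case statistics are biased. The variables and threshold below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenario: a biomarker correlated with age that is only
# measured for patients aged 50 or over, e.g. because one contributing
# data source only enrolled older patients
age = rng.uniform(20, 80, size=10_000)
biomarker = 0.5 * age + rng.normal(0, 5, size=10_000)

# Structured missingness: recorded only when age >= 50
observed = np.where(age >= 50, biomarker, np.nan)

true_mean = biomarker.mean()
complete_case_mean = np.nanmean(observed)  # systematically too high
```

Dropping the missing rows here overestimates the population mean, because the values are missing with a specific structure rather than at random; methods that assume missingness is random will inherit this bias.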
Olfactory navigation is a well-studied topic in insect behaviour, but many aspects of the challenging task of odour plume tracking are unknown. In a deep reinforcement learning approach, artificial agents are trained to produce (in silico) trajectories to localize the source of an odour plume, showing dynamics that mimic real insect behaviours.
When it comes to reasoning about the motion of physical objects, humans have natural intuitive physics knowledge. To test how good artificial learning agents are in similar predictive abilities, Xue and colleagues present a benchmark based on a two-dimensional physics environment in which 15 physical reasoning skills are measured.
AI language modelling and generation approaches have developed fast in the last decade, opening promising new directions in human–AI collaboration. An AI-in-the-loop conversational system called HAILEY is developed to empower peer supporters in providing empathic responses to mental health support seekers.
A recent case of a flawed medical AI system that was backed by public funding provides an opportunity to discuss the impact of government policies and regulation in AI.
The reconstruction of spatially resolved information of an extended object from an observed intensity diffraction pattern in holographic imaging is a challenging problem. By incorporating an explicit physical model, Lee and colleagues propose a deep learning method that can be used in holographic image reconstruction under physical perturbations and which generalizes to object-to-sensor distances and pixel sizes beyond those seen during training.
Despite recent improvements in microscopy acquisition methods, extracting quantitative information from biological experiments in crowded conditions is a challenging task. Pineda and colleagues propose a geometric deep-learning-based framework for automated trajectory linking and dynamical property estimation that is able to effectively deal with complex biological scenarios.
Techniques from machine translation of natural languages can now be used to automatically detect different cell types from single-cell transcriptomic data. Such a feat opens the prospect of dissecting complex clinical samples, such as heterogeneous tumours, at scale.