Neural network force fields promise to bypass the computationally expensive quantum mechanical calculations typically required to investigate complex materials, such as lithium-ion batteries. Mailoa et al. accelerate these approaches with an architecture that exploits both rotation-invariant and -covariant features separately.
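The invariant/covariant distinction can be illustrated with a toy example (a hypothetical sketch, not the paper's architecture): for a displacement vector between two atoms, the interatomic distance is rotation-invariant, while the direction rotates exactly with the input.

```python
import numpy as np

# Hypothetical illustration: an interatomic displacement vector r.
r = np.array([1.0, 2.0, 2.0])

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about z

# Invariant feature: the distance |r| is unchanged by rotation.
inv_before = np.linalg.norm(r)
inv_after = np.linalg.norm(R @ r)

# Covariant feature: the unit direction rotates exactly with the input.
cov_after = (R @ r) / np.linalg.norm(R @ r)
cov_expected = R @ (r / np.linalg.norm(r))

print(np.isclose(inv_before, inv_after))   # invariance holds
print(np.allclose(cov_after, cov_expected))  # covariance holds
```

A network that keeps these two feature types in separate streams can use each where it is physically appropriate (energies from invariants, forces from covariants).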
Optoacoustic imaging can achieve high spatial and temporal resolution, but image quality is often compromised by suboptimal data acquisition. A new method employing deep learning to recover high-quality images from sparse or limited-view optoacoustic scans has been developed and demonstrated for whole-body mouse imaging in vivo.
Labelling training data for machine learning models is very time-consuming. A new method shows that content transformation can be effectively learned from generated data, avoiding the need for any manual labelling in segmentation and classification tasks.
To safely operate in the real world, robots need to evaluate how confident they are about what they see. A new competition challenges computer vision algorithms to not just detect and localize objects, but also report how certain they are.
For the neuromorphic research field to advance into mainstream computing, it needs to start quantifying gains, standardizing benchmarks and focusing on feasible application challenges.
Brain–machine interfaces were envisioned as early as the 1940s by Norbert Wiener, the father of cybernetics. The opportunities for enhancing human capabilities and restoring functions are now quickly expanding with a combination of advances in machine learning, smart materials and robotics.
Brain–machine interfaces using steady-state visually evoked potentials (SSVEPs) show promise in therapeutic applications. With a combination of innovations in flexible and soft electronics and in deep learning approaches to classify potentials from two channels and from any subject, a compact, wireless and universal SSVEP interface is designed. Subjects can operate a wheelchair in real time with eye movements while wearing the new brain–machine interface.
A combination of engineering advances shows promise for myoelectric prosthetic hands that are controlled by a user’s remaining muscle activity. Fine finger movements are decoded from surface electromyograms with machine learning algorithms, and this is combined with a robotic controller that is active only during object grasping to help maximize contact. This shared control scheme allows user-controlled movements when high dexterity is desired, but also assisted grasping when robustness is required.
Memristive devices can provide energy-efficient neural network implementations, but they must be tailored to suit different network architectures. Wang et al. develop a trainable weight-sharing mechanism for memristor-based CNNs and ConvLSTMs, achieving a 75% reduction in weights without compromising accuracy.
Controlling the flow and representation of information in deep neural networks is fundamental to making networks intelligible. Bergomi et al. introduce a mathematical framework in which the space of possible operators representing the data is constrained by using symmetries. This constrained space is still suitable for machine learning: operators can be efficiently computed, approximated and parameterized for optimization.
As AI technology develops rapidly, it is widely recognized that ethical guidelines are required for safe and fair implementation in society. But is it possible to agree on what is ‘ethical AI’? A detailed analysis of 84 AI ethics reports around the world, from national and international organizations, companies and institutes, explores this question, finding a convergence around core principles but substantial divergence on practical implementation.
As machine learning methods are adopted across the scientific community, strong code sharing and reviewing practices are required. Our policy mandates that code essential to the main results is made available to reviewers, and to readers on publication. Our partnership with Code Ocean helps authors and reviewers navigate this process.
DeepMind’s AlphaFold recently demonstrated the potential of deep learning for protein structure prediction. DeepFragLib, a new protein-specific fragment library built using deep neural networks, may have advanced the field to the next stage.
As nations come together in Tokyo next summer to celebrate the spirit of human potential in the 2020 Olympic Games, they will have a chance to take part in another international competition hosted by Japan soon after, this time with challenges designed for robot contenders.
To create less harmful technologies and ignite positive social change, AI engineers need to enlist ideas and expertise from a broad range of social science disciplines, including those embracing qualitative methods, say Mona Sloane and Emanuel Moss.
An approach to protein structure prediction is to assemble candidate structures from template fragments, which are extracted from known protein structures. Wang et al. demonstrate that combining deep neural network architectures with a relatively small but high-resolution fragment dataset can improve the quality of the sample fragment libraries used for protein structure prediction.
Traditional robotic grasping focuses on manipulating an object, often without considering the goal or task involved in the movement. The authors propose a new metric for success in manipulation that is based on the task itself.
When neural networks are retrained to solve more than one problem, they tend to forget what they have learned earlier. Here, the authors propose orthogonal weights modification, a method to avoid this so-called catastrophic forgetting. Building on this method, they introduce a new module that enables the network to continually learn context-dependent processing.
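The core idea of projecting weight updates orthogonally to previously learned inputs can be sketched as follows. This is a minimal toy version for a single linear layer, assuming an RLS-style recursive projector update; the hyperparameters and full training procedure in the paper differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

def update_projector(P, x, alpha=1e-3):
    """Shrink P so that directions spanned by x are (nearly) projected out."""
    Px = P @ x
    return P - np.outer(Px, Px) / (alpha + x @ Px)

# Task A: input directions the network has already "learned".
P = np.eye(d)
old_inputs = [rng.standard_normal(d) for _ in range(3)]
for x in old_inputs:
    P = update_projector(P, x)

# A linear readout trained on task A (weights are arbitrary here).
W = rng.standard_normal((2, d))
y_before = [W @ x for x in old_inputs]

# Task B: the gradient step is projected through P so it cannot
# disturb responses to the old inputs.
x_new = rng.standard_normal(d)
error = rng.standard_normal(2)            # stand-in for a loss gradient
W = W - 0.1 * np.outer(error, P @ x_new)  # orthogonally projected update

# Old responses are nearly preserved: catastrophic forgetting is avoided.
y_after = [W @ x for x in old_inputs]
drift = max(np.linalg.norm(a - b) for a, b in zip(y_before, y_after))
print(drift)
```

Because `P` maps old inputs close to zero, the update `np.outer(error, P @ x_new)` changes the layer's behaviour only in directions orthogonal to what was previously learned.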
Deep neural networks can contain arbitrary mathematical operators, as long as they are differentiable. The authors investigate how knowledge about a problem can be incorporated into machine learning through the use of operators that are related to the problem.
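As a toy illustration of this idea (a hedged sketch, not the authors' method): if domain knowledge says the target varies sinusoidally with the input, the sine operator can be baked into the model and gradients propagated through it by the chain rule, so that only the remaining free parameters need to be learned.

```python
import numpy as np

# Known structure: y = a * sin(x) + b; only a and b are learned.
x = np.linspace(0.0, 6.0, 100)
y = 2.0 * np.sin(x) + 0.5          # synthetic data from the true model

a, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    pred = a * np.sin(x) + b
    err = pred - y
    # Chain rule through the differentiable sine operator:
    grad_a = np.mean(2.0 * err * np.sin(x))
    grad_b = np.mean(2.0 * err)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))
```

Embedding the known operator makes the problem low-dimensional and easy to optimize, whereas a generic network would have to learn the sinusoidal structure from data.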