DeepMind’s AlphaFold recently demonstrated the potential of deep learning for protein structure prediction. DeepFragLib, a new protein-specific fragment library built using deep neural networks, may have advanced the field to the next stage.
To prepare robots for working autonomously under real-world conditions, their resilience and their capability to recover from damage need to improve radically. A fresh take on robot design suggests that instead of adapting the robotic control strategy, we could enable robots to change their physical bodies to recover more effectively from damage.
Classic theories of reinforcement learning and neuromodulation rely on reward prediction errors. A new machine learning technique relies on neuromodulatory signals that are optimized for specific tasks, which may lead to better AI and better explanations of neuroscience data.
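The reward prediction error at the heart of these classic theories can be written as the temporal-difference error, delta = r + gamma * V(s') - V(s). The following is a minimal illustrative sketch of that classic quantity, not the task-optimized neuromodulatory signals of the new technique; the states, reward, and learning rate are hypothetical.

```python
def td_error(reward, value_next, value_current, gamma=0.9):
    """Classic reward prediction error: delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * value_next - value_current

# Tabular value update driven by the prediction error (illustrative only).
values = {"s0": 0.0, "s1": 0.0}
alpha = 0.5  # learning rate

# Suppose the agent transitions s0 -> s1 and receives a reward of 1.0:
delta = td_error(reward=1.0, value_next=values["s1"], value_current=values["s0"])
values["s0"] += alpha * delta  # V(s0) moves toward the better estimate
```

In classic accounts, dopamine signals are modeled as broadcasting this single error term; the new approach instead lets the learning process itself shape what the neuromodulatory signal encodes for a given task.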
Humans infer much about the intentions of others just by following their gaze. Similarly, we want to understand how machine learning systems solve a problem. New tools have been developed to uncover what strategies a learning machine is using, such as what it pays attention to when classifying images.
To be useful in a variety of daily tasks, robots must be able to interact physically with humans and infer how to be most helpful. A new theory for interactive robot control allows a robot to learn when to assist or challenge a human during reaching movements.