Articles in 2021

  • Complex physical processes such as flow fields can be predicted with deep learning when good-quality sensor data are available, but sparsely placed sensors and sensors that change position present a problem. A new approach from Kai Fukami and colleagues, based on Voronoi tessellation, makes it possible to reconstruct a global field from an arbitrary number of moving sensors (a minimal illustrative sketch follows this entry).

    • Kai Fukami
    • Romit Maulik
    • Kunihiko Taira
    Article
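
    A minimal sketch of the general idea, not the authors' code: scattered sensor readings are projected onto a regular grid by giving every grid cell the value of its nearest sensor (its Voronoi cell), and the resulting image can then be fed to a convolutional network that reconstructs the global field. The grid size, sensor count and toy measurements below are illustrative assumptions.

    ```python
    # Voronoi-style encoding of scattered sensor data onto a regular grid.
    import numpy as np
    from scipy.spatial import cKDTree

    def voronoi_encode(sensor_xy, sensor_values, grid_shape=(64, 64)):
        """Assign every grid cell the value of its nearest sensor."""
        ny, nx = grid_shape
        gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
        grid_points = np.column_stack([gx.ravel(), gy.ravel()])
        # The nearest-sensor lookup defines the Voronoi tessellation of the domain.
        _, nearest = cKDTree(sensor_xy).query(grid_points)
        return sensor_values[nearest].reshape(ny, nx)

    # Works for any number of sensors, and positions may change between snapshots.
    rng = np.random.default_rng(0)
    sensor_xy = rng.random((10, 2))                       # 10 sensors at random positions
    sensor_values = np.sin(2 * np.pi * sensor_xy[:, 0])   # stand-in measurements
    image = voronoi_encode(sensor_xy, sensor_values)      # (64, 64) image for a CNN
    ```
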
  • Optimization problems can be described in a statistical physics framework. This makes it possible to use ‘simulated annealing’, a procedure that searches for a target solution in a way analogous to the gradual cooling of a condensed-matter system to its ground state. The approach can now be sped up significantly by parametrizing it with recurrent neural networks, in a new strategy called variational neural annealing (sketched after this entry).

    • Mohamed Hibat-Allah
    • Estelle M. Inack
    • Juan Carrasquilla
    Article
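
    A rough sketch of the variational annealing objective under simplifying assumptions: the paper parametrizes the distribution over configurations with an autoregressive recurrent network, whereas the toy below uses a factorized Bernoulli distribution on a random Ising-like problem. It minimizes a sampled free energy ⟨E⟩ + T⟨log p⟩ with a REINFORCE-style gradient while the temperature T is lowered; problem size, couplings and the schedule are arbitrary.

    ```python
    # Variational annealing of a toy Ising-like problem with a factorised distribution.
    import torch

    N = 16                                                # number of spins
    J = torch.randn(N, N); J = (J + J.T) / 2              # random symmetric couplings
    J.fill_diagonal_(0)

    logits = torch.zeros(N, requires_grad=True)           # variational parameters
    opt = torch.optim.Adam([logits], lr=0.05)

    def energy(s):                                        # s has entries in {-1, +1}
        return -0.5 * torch.einsum('bi,ij,bj->b', s, J, s)

    for step in range(2000):
        T = max(1.0 * (1 - step / 2000), 1e-3)            # linear annealing schedule
        probs = torch.sigmoid(logits)
        bits = torch.bernoulli(probs.expand(256, N))      # sample 256 configurations
        spins = 2.0 * bits - 1.0
        log_p = (bits * torch.log(probs + 1e-9)
                 + (1 - bits) * torch.log(1 - probs + 1e-9)).sum(-1)
        free_energy = energy(spins) + T * log_p           # per-sample free energy
        # REINFORCE-style gradient of the variational free energy, with a mean baseline.
        loss = ((free_energy - free_energy.mean()).detach() * log_p).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    print('best sampled energy:', energy(spins).min().item())
    ```
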
  • Can the human brain cope with controlling an extra robotic arm or digit added to the body?

    Editorial
  • Camera trapping is a widely adopted method for monitoring terrestrial mammals. However, a drawback is the amount of human annotation needed to keep pace with continuous data collection. The authors developed a hybrid system of machine learning with humans in the loop, which minimizes the annotation load and improves efficiency (a minimal triage sketch follows this entry).

    • Zhongqi Miao
    • Ziwei Liu
    • Wayne M. Getz
    Article
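
    A minimal sketch of the human-in-the-loop idea, not the authors' pipeline: predictions above a confidence threshold are accepted automatically and the remainder are queued for human annotation (the paper's system also retrains on the newly annotated images). The triage function, the model.predict interface and the threshold below are hypothetical.

    ```python
    # Confidence-based triage: accept confident machine labels, queue the rest for humans.
    def triage(images, model, threshold=0.9):
        auto_labelled, needs_human = [], []
        for img in images:
            label, confidence = model.predict(img)   # hypothetical classifier interface
            if confidence >= threshold:
                auto_labelled.append((img, label))   # trusted without human review
            else:
                needs_human.append(img)              # sent to human annotators
        return auto_labelled, needs_human
    ```
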
  • The development of extra fingers and arms is an exciting research area in robotics, human–machine interaction and wearable electronics. It is unclear, however, whether humans can adapt and learn to control extra limbs and integrate them into a new sensorimotor representation, without sacrificing their natural abilities. The authors review this topic and describe challenges in allocating neural resources for robotic body augmentation.

    • Giulia Dominijanni
    • Solaiman Shokur
    • Silvestro Micera
    Review Article
  • Combining generative models and reinforcement learning has become a promising direction for computational drug design, but it is challenging to train an efficient model that produces candidate molecules with high diversity. Jike Wang and colleagues present a method that uses knowledge distillation to condense a conditional transformer model so that it can be used in reinforcement learning while still generating diverse molecules that optimize multiple molecular properties (the distillation step is sketched after this entry).

    • Jike Wang
    • Chang-Yu Hsieh
    • Tingjun Hou
    Article
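
    A minimal sketch of the distillation step only, using the standard knowledge-distillation formulation: a small student generator is trained to match the softened token distributions of the large conditional-transformer teacher, so that the cheaper student can later be fine-tuned with reinforcement learning. Model shapes, the temperature and the vocabulary size are illustrative, not the authors' settings.

    ```python
    # Distilling a large conditional transformer "teacher" into a small "student" generator.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL divergence between softened teacher and student token distributions."""
        t = temperature
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        # batchmean KL scaled by t**2, as in standard knowledge distillation
        return F.kl_div(student_log_probs, teacher_probs, reduction='batchmean') * t * t

    # Random logits stand in for model outputs over a 64-token SMILES vocabulary.
    batch_tokens, vocab = 256, 64
    teacher_logits = torch.randn(batch_tokens, vocab)
    student_logits = torch.randn(batch_tokens, vocab, requires_grad=True)
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()   # gradients flow into the student only
    ```
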
  • A growing number of researchers are developing approaches to improve fairness in machine learning applications in areas such as healthcare, employment and social services, to avoid propagating and amplifying racial and other inequities. An empirical study explores the trade-off between increasing fairness and model accuracy across several social policy areas and finds that this trade-off is negligible in practice.

    • Kit T. Rodolfa
    • Hemank Lamba
    • Rayid Ghani
    Article
  • Turbulent optical distortions in the atmosphere limit the performance of optical technologies such as laser communication and long-distance environmental monitoring. A new method based on adversarial networks learns to counter the physical processes underlying the turbulence so that complex optical scenes can be reconstructed.

    • Darui Jin
    • Ying Chen
    • Xiangzhi Bai
    Article
  • The use of sparse signals in spiking neural networks, modelled on biological neurons, offers in principle a highly efficient alternative to conventional artificial neural networks when implemented on neuromorphic hardware, but new training approaches are needed to improve performance. Using a new type of activity-regularizing surrogate gradient for backpropagation, combined with recurrent networks of tunable and adaptive spiking neurons, the authors demonstrate state-of-the-art performance for spiking neural networks on time-domain benchmarks (the surrogate gradient idea is sketched after this entry).

    • Bojian Yin
    • Federico Corradi
    • Sander M. Bohté
    Article
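
    A minimal sketch of the surrogate gradient trick in general, not the paper's specific activity-regularized variant: the forward pass emits binary spikes, while the backward pass substitutes a smooth pseudo-derivative so that backpropagation through spiking neurons is possible. The fast-sigmoid-shaped surrogate below is a common generic choice.

    ```python
    # A surrogate gradient: binary spikes in the forward pass, smooth derivative backwards.
    import torch

    class SurrogateSpike(torch.autograd.Function):
        @staticmethod
        def forward(ctx, membrane_potential):
            ctx.save_for_backward(membrane_potential)
            return (membrane_potential > 0).float()        # emit a spike above threshold

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            # Fast-sigmoid-style surrogate derivative, largest near the threshold.
            surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
            return grad_output * surrogate

    spike = SurrogateSpike.apply
    v = torch.randn(4, requires_grad=True)                 # membrane potentials
    spike(v).sum().backward()                              # gradients pass through the surrogate
    print(v.grad)
    ```
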
  • In the AlphaPilot Challenge, teams compete to fly autonomous drones through an obstacle course as fast as possible. The 2019 winning team MAVLab reflects on the challenge of beating human pilots.

    • C. De Wagter
    • F. Paredes-Vallés
    • G. de Croon
    Challenge Accepted
  • The radiomics features of disease lesions can be learned from medical imaging data, but is it possible to identify interpretable biomarkers that can help make clinical predictions across heterogeneous diseases and data from different modalities?

    • Yue Wang
    • David M. Herrington
    News & Views
  • T-cell immunity is driven by the interaction between peptides presented by major histocompatibility complexes (pMHCs) and T-cell receptors (TCRs). Only a small proportion of neoantigens elicit T-cell responses, and it is not clear which neoantigens are recognized by which TCRs. The authors develop a transfer learning model to predict TCR binding specificity to class-I pMHCs.

    • Tianshi Lu
    • Ze Zhang
    • Tao Wang
    Article
  • Very large neural network models such as GPT-3, which have many billions of parameters, are on the rise, but so far only big tech has the resources to train, deploy and study such models. This needs to change, say Stanford AI researchers, who call for an investment in academic collaborations to build and study large neural networks.

    Editorial
  • Functional subsystems of the macroscale human brain connectome are mapped onto a recurrent neural network and found to perform optimally in a critical regime at the edge of chaos.

    • Nabil Imam
    News & Views
  • Neuromorphic chips that use spikes to encode information could provide fast and energy-efficient computing for ubiquitous embedded systems. A bio-plausible spike-timing solution for training spiking neural networks that makes the most of sparsity is implemented on the BrainScaleS-2 hardware platform.

    • Charlotte Frenkel
    News & Views
  • Spiking neural networks promise fast and energy-efficient information processing. The ‘time-to-first-spike’ coding scheme, in which the time elapsed before a neuron’s first spike is used as the main variable, is a particularly efficient approach. Göltz, Kriener and colleagues demonstrate that error backpropagation, an essential ingredient for learning in neural networks, can be implemented in this scheme (a minimal illustration follows this entry).

    • J. Göltz
    • L. Kriener
    • M. A. Petrovici
    Article
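
    A minimal illustration of time-to-first-spike classification, assuming first-spike times are already available as differentiable quantities: a softmax over negative spike times rewards the correct class for firing earliest. The paper derives exact gradients through the spiking dynamics and runs on the BrainScaleS-2 platform; the numbers below are placeholders.

    ```python
    # Classification loss on first-spike times: the correct class should fire earliest.
    import torch
    import torch.nn.functional as F

    first_spike_times = torch.tensor([[3.2, 1.5, 4.0],    # rows: samples, columns: classes (ms)
                                      [2.0, 2.8, 0.9]], requires_grad=True)
    targets = torch.tensor([1, 2])                         # class that should spike first

    loss = F.cross_entropy(-first_spike_times, targets)    # earlier spike -> larger logit
    loss.backward()
    print(first_spike_times.grad)
    ```
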
  • When the training data for machine learning are highly personal or sensitive, collaborative approaches can help a collective of stakeholders to train a model together without having to share any data. But there are still risks to the privacy of the data. This Perspective provides an overview of potential attacks on collaborative machine learning and how these threats could be addressed.

    • Dmitrii Usynin
    • Alexander Ziller
    • Jonathan Passerat-Palmbach
    Perspective