Learning algorithms articles within Nature Communications

Featured

  • Article
    | Open Access

    Previous work has shown that natural cardiac rhythms modulate the perception of and reaction to sensory cues through changes in associated neural signals. Here, the authors show that sensitivity to prediction errors during reward learning is related to the phase of the cardiac cycle; a toy sketch of phase-gated prediction-error updating follows this entry.

    • Elsa F. Fouragnan
    • , Billy Hosking
    •  & Alejandra Sel
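
    A minimal, hypothetical sketch of what phase-gated reward learning could look like: a Rescorla-Wagner update whose sensitivity to the prediction error is modulated by cardiac phase. The cosine gain profile and every constant below are illustrative assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    alpha = 0.3        # baseline learning rate
    V = 0.0            # learned value of the cue

    for trial in range(200):
        reward = rng.binomial(1, 0.7)      # cue pays off on 70% of trials
        phase = rng.uniform(0, 2 * np.pi)  # cardiac phase at feedback (radians)
        gain = 1.0 + 0.5 * np.cos(phase)   # assumed phase-dependent PE sensitivity
        delta = reward - V                 # reward prediction error
        V += alpha * gain * delta          # phase-gated update

    print(f"final value estimate: {V:.2f} (true reward rate 0.70)")
    ```
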
  • Article
    | Open Access

    Encoding and downsampling images is key for visual prostheses. Here, the authors show that an actor-model framework using the inherent computation of the retinal network yields better performance in downsampling images compared to learning-free methods.

    • Franklin Leong
    • , Babak Rahmani
    •  & Diego Ghezzi
  • Article
    | Open Access

    High computational cost severely limits the applications of biophysically detailed multi-compartment models. Here, the authors present DeepDendrite, a GPU-optimized tool that drastically accelerates detailed neuron simulations for neuroscience and AI, enabling exploration of intricate neuronal processes and dendritic learning mechanisms in these fields.

    • Yichen Zhang
    • , Gan He
    •  & Tiejun Huang
  • Article
    | Open Access

    The biological plausibility of backpropagation and its relationship with synaptic plasticity remain open questions. The authors propose a meta-learning approach to discover interpretable plasticity rules to train neural networks under biological constraints. The meta-learned rules boost the learning efficiency via bio-inspired synaptic plasticity.

    • Navid Shervani-Tabar
    •  & Robert Rosenbaum
  • Article
    | Open Access

    How we juggle morally conflicting outcomes during learning remains unknown. Here, by comparing variants of reinforcement learning models, the authors show that participants differ substantially in their preference, with some choosing actions that benefit themselves while others choose actions that prevent harm.

    • Laura Fornari
    • , Kalliopi Ioumpa
    •  & Valeria Gazzola
  • Article
    | Open Access

    Behavioral feedback is critical for learning, but it is often not available. Here, the authors introduce a deep learning model in which the cerebellum provides the cerebrum with feedback predictions, thereby facilitating learning, reducing dysmetria, and making several experimental predictions.

    • Ellen Boven
    • , Joseph Pemberton
    •  & Rui Ponte Costa
  • Article
    | Open Access

    Pain fluctuates over time in ways that are non-random. Here, the authors show that the human brain can learn to predict these changes in a manner consistent with optimal Bayesian inference by engaging sensorimotor, parietal, and premotor regions.

    • Flavia Mancini
    • , Suyi Zhang
    •  & Ben Seymour
  • Article
    | Open Access

    Artificial intelligence approaches inspired by human cognitive function usually have a single learned ability. The authors propose a multimodal foundation model that demonstrates cross-domain learning and adaptation for a broad range of downstream cognitive tasks.

    • Nanyi Fei
    • , Zhiwu Lu
    •  & Ji-Rong Wen
  • Article
    | Open Access

    Brain-inspired neural generative models can be designed to learn complex probability distributions from data. Here the authors propose a neural generative computational framework, inspired by the theory of predictive processing in the brain, that facilitates parallel computing for complex tasks.

    • Alexander Ororbia
    •  & Daniel Kifer
  • Article
    | Open Access

    Tasks involving continual learning and adaptation to real-time scenarios remain challenging for artificial neural networks, in contrast to real brains. The authors propose here a brain-inspired optimizer, based on mechanisms of synaptic integration and strength regulation, that improves the performance of both artificial and spiking neural networks.

    • Giorgia Dellaferrera
    • , Stanisław Woźniak
    •  & Evangelos Eleftheriou
  • Article
    | Open Access

    It is unknown whether object category knowledge can be formed purely through domain-general learning of natural image structure. Here the authors show that human visual brain responses to objects are well captured by self-supervised deep neural network models trained without labels, supporting a domain-general account.

    • Talia Konkle
    •  & George A. Alvarez
  • Article
    | Open Access

    Chronic nicotine exposure impacts various components of decision-making processes, such as exploratory behaviors. Here, the authors identify the cellular mechanism and show that chronic nicotine exposure increases the tonic activity of VTA dopaminergic neurons and reduces exploration in mice.

    • Malou Dongelmans
    • , Romain Durand-de Cuttoli
    •  & Philippe Faure
  • Article
    | Open Access

    Human learning depends on the opposing effects of two noise factors: volatility and stochasticity. Here the authors present a model of learning that shows how and why joint estimation of these factors is important for understanding healthy and pathological learning; a minimal Kalman-filter sketch of the distinction follows this entry.

    • Payam Piray
    •  & Nathaniel D. Daw
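
    A minimal Kalman-filter sketch of the volatility/stochasticity distinction, with both noise parameters fixed and known; the paper's model jointly infers them from experience, which this toy deliberately omits.

    ```python
    import numpy as np

    def mean_learning_rate(volatility, stochasticity, n_trials=500, seed=1):
        """Track a drifting reward rate; return the average Kalman gain."""
        rng = np.random.default_rng(seed)
        latent, estimate, variance = 0.0, 0.0, 1.0
        gains = []
        for _ in range(n_trials):
            latent += rng.normal(0.0, np.sqrt(volatility))    # true value drifts
            outcome = latent + rng.normal(0.0, np.sqrt(stochasticity))
            variance += volatility                            # predictive variance
            k = variance / (variance + stochasticity)         # gain = learning rate
            estimate += k * (outcome - estimate)
            variance *= 1.0 - k
            gains.append(k)
        return np.mean(gains)

    # Volatility calls for a HIGH learning rate, stochasticity for a LOW one:
    print(mean_learning_rate(volatility=0.5, stochasticity=0.1))  # high (~0.85)
    print(mean_learning_rate(volatility=0.1, stochasticity=0.5))  # low  (~0.36)
    ```
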
  • Article
    | Open Access

    The authors show that heterogeneity in spiking neural networks improves the accuracy and robustness of predictions in complex information-processing tasks, yields optimal parameter distributions similar to experimental data, and is metabolically efficient for learning tasks at varying timescales.

    • Nicolas Perez-Nieves
    • , Vincent C. H. Leung
    •  & Dan F. M. Goodman
  • Article
    | Open Access

    Models of decision making have so far been unable to account for how humans’ choices can be flexible yet efficient. Here the authors present a linear reinforcement learning model that explains both flexibility and rare limitations, such as habits, as arising from efficient approximate computation; a toy sketch of the underlying linearly solvable formulation follows this entry.

    • Payam Piray
    •  & Nathaniel D. Daw
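
    Linear RL builds on linearly solvable MDPs, in which exponentiated values obey a linear equation under a default policy and habits correspond to that default. A toy sketch under those standard assumptions (a four-state chain, not the paper's tasks):

    ```python
    import numpy as np

    # Default policy: random walk on a 4-state chain; state 3 is absorbing.
    P = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.5],
                  [0.0, 0.0, 0.0, 1.0]])
    r = np.array([-1.0, -1.0, -1.0, 0.0])    # unit step cost; the goal is free

    nt = [0, 1, 2]                           # non-terminal states
    z = np.ones(4)
    z[3] = np.exp(r[3])                      # terminal desirability
    for _ in range(200):                     # fixed point of z = exp(r) * (P @ z)
        z[nt] = np.exp(r[nt]) * (P[nt] @ z)

    V = np.log(z)                            # value function
    pi = P * z                               # decision policy ∝ P(s'|s) z(s')
    pi /= pi.sum(axis=1, keepdims=True)
    print(np.round(V, 2))                    # values rise toward the goal
    print(np.round(pi, 2))                   # the default walk is biased goalward
    ```
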
  • Article
    | Open Access

    Dopamine neurons in the mushroom body help Drosophila learn to approach rewards and avoid punishments. Here, the authors propose a model in which dopaminergic learning signals encode reinforcement prediction errors by utilising feedback reinforcement predictions from mushroom body output neurons.

    • James E. M. Bennett
    • , Andrew Philippides
    •  & Thomas Nowotny
  • Article
    | Open Access

    Deep neural networks usually rapidly forget previously learned tasks while training on new ones. Laborieux et al. propose a method for training binarized neural networks, inspired by neuronal metaplasticity, that avoids catastrophic forgetting and is relevant for neuromorphic applications; a caricature of the consolidation rule follows this entry.

    • Axel Laborieux
    • , Maxence Ernoult
    •  & Damien Querlioz
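
    A caricature of the metaplastic idea: the functional binary weight is the sign of a hidden real-valued weight, and updates that push the hidden weight back toward zero (toward a sign flip) are damped the more that weight has grown. The damping shape and constant m below are assumptions, not the paper's exact update.

    ```python
    import numpy as np

    def metaplastic_step(w_hidden, grad, lr=0.1, m=2.0):
        """One update of hidden weights; the binary weights are sign(w_hidden).
        Steps toward zero are attenuated for large |w_hidden| (consolidation)."""
        step = -lr * grad
        toward_zero = np.sign(step) != np.sign(w_hidden)
        damp = np.where(toward_zero, 1.0 - np.tanh(m * np.abs(w_hidden)) ** 2, 1.0)
        return w_hidden + damp * step

    w = np.array([0.05, 1.5])       # weakly vs strongly consolidated synapse
    g = np.array([1.0, 1.0])        # the same gradient pushes both toward a flip
    print(metaplastic_step(w, g))   # [-0.049, 1.499]: only the weak one flips
    ```
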
  • Article
    | Open Access

    Higher-order sequence learning using a structured graph representation, clone-structured cognitive graphs (CSCGs), can explain how the hippocampus learns cognitive maps. CSCGs provide novel explanations for transferable schemas and transitive inference in the hippocampus, and for how place cells, splitter cells, lap cells and a variety of other phenomena emerge from the same set of fundamental principles.

    • Dileep George
    • , Rajeev V. Rikhye
    •  & Miguel Lázaro-Gredilla
  • Article
    | Open Access

    Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.

    • Stefano Recanatesi
    • , Matthew Farrell
    •  & Eric Shea-Brown
  • Article
    | Open Access

    Recent critical commentaries unfavorably compare deep learning (DL) with standard machine learning (SML) for brain imaging data analysis. Here, the authors show that if trained following prevalent DL practices, DL methods substantially improve compared to SML methods by encoding robust discriminative brain representations.

    • Anees Abrol
    • , Zening Fu
    •  & Vince Calhoun
  • Article
    | Open Access

    Surprisingly, motor cortex becomes less involved in performing skilled motor behaviors as they are practiced. This is addressed by a model of two descending pathways featuring different types of learning: fast learning in a cortical pathway to maximize rewards and slow learning in a subcortical pathway to reinforce behaviors through repetition.

    • James M. Murray
    •  & G. Sean Escola
  • Article
    | Open Access

    Humans can unconsciously learn to gamble on rewarding options, but can they do so when it comes to their own mental states? Here, the authors show that participants can learn to use unconscious representations in their own brains to earn rewards, and that metacognition correlates with their learning processes.

    • Aurelio Cortese
    • , Hakwan Lau
    •  & Mitsuo Kawato
  • Article
    | Open Access

    Bellec et al. present a mathematically founded approximation for gradient-descent training of recurrent neural networks without backpropagation through time. This enables biologically plausible training of spike-based neural network models with working memory and supports on-chip training of neuromorphic hardware; a simplified eligibility-trace sketch follows this entry.

    • Guillaume Bellec
    • , Franz Scherr
    •  & Wolfgang Maass
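
    A heavily simplified caricature of the eligibility-trace idea on a single leaky-integrator unit (the paper derives exact traces for spiking LSNNs): the gradient is accumulated forward in time as a local trace and combined online with an error signal, so nothing is propagated backwards through time.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, n_in, alpha, lr = 50, 3, 0.8, 0.05
    w = np.zeros(n_in)
    w_true = np.array([1.0, -0.5, 0.2])      # teacher weights to recover

    for epoch in range(300):
        x = rng.normal(size=(T, n_in))
        h = h_true = 0.0
        trace = np.zeros(n_in)               # eligibility trace: dh/dw, kept forward in time
        dw = np.zeros(n_in)
        for t in range(T):
            h = alpha * h + w @ x[t]         # student leaky neuron
            h_true = alpha * h_true + w_true @ x[t]
            trace = alpha * trace + x[t]
            L = h - h_true                   # online learning signal (error)
            dw += L * trace                  # e-prop-style product; no BPTT
        w -= lr * dw / T
    print(np.round(w, 2))                    # ≈ [ 1.  -0.5  0.2]
    ```
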
  • Article
    | Open Access

    Habitat complexity influences the sensory ecology of predator-prey interactions. Here, the authors show that habitat complexity also affects the use of different decision-making paradigms, namely habit- and plan-based action selection. Simulations across habitat types show that only savanna-like terrestrial habitats favor planning during visually guided predator evasion, while aquatic and simple terrestrial habitats do not.

    • Ugurcan Mugan
    •  & Malcolm A. MacIver
  • Article
    | Open Access

    The cognitive computational mechanisms underlying the antidepressant treatment response of SSRIs are not well understood. Here the authors show that a week of SSRI treatment in healthy subjects manifests as an amplification of the perception of positive outcomes when learning occurs in a positive mood setting.

    • Jochen Michely
    • , Eran Eldar
    •  & Raymond J. Dolan
  • Article
    | Open Access

    The authors use a combination of perceptual decision making in rats and computational modeling to explore the interplay of priors and sensory cues. They find that rats can learn to either alternate or repeat their actions based on reward likelihood, and that the influence of this bias on their actions disappears after an error.

    • Ainhoa Hermoso-Mendizabal
    • , Alexandre Hyafil
    •  & Jaime de la Rocha
  • Article
    | Open Access

    The neural activity space, or manifold, that represents object information changes across the layers of a deep neural network. Here the authors present a theoretical account of the relationship between the geometry of these manifolds and the classification capacity of the network.

    • Uri Cohen
    • , SueYeon Chung
    •  & Haim Sompolinsky
  • Article
    | Open Access

    Dopamine neurons are proposed to signal the reward prediction error in model-free reinforcement learning algorithms. Here, the authors show that when given during an associative learning task, optogenetic activation of dopamine neurons causes associative, rather than value, learning.

    • Melissa J. Sharpe
    • , Hannah M. Batchelor
    •  & Geoffrey Schoenbaum
  • Article
    | Open Access

    Pavlovian and instrumentally driven actions often conflict when determining the best outcome. Here, the authors present an arbitration theory, supported by human behavioral data, in which Pavlovian predictors drive action selection in an uncontrollable environment, while more flexible instrumental prediction dominates under conditions of high controllability; a minimal value-blending sketch follows this entry.

    • Hayley M. Dorfman
    •  & Samuel J. Gershman
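
    A minimal sketch of the arbitration idea: blended action values in which a Pavlovian predictor dominates when the environment is judged uncontrollable, and instrumental values dominate otherwise. The numbers and the fixed controllability estimate are illustrative; the paper infers controllability from experience.

    ```python
    import numpy as np

    def blended_value(q_inst, v_pav, p_uncontrollable):
        """Arbitrated action values: a convex mix of Pavlovian and instrumental."""
        return p_uncontrollable * v_pav + (1 - p_uncontrollable) * q_inst

    q = np.array([0.2, 0.6])         # learned instrumental values, two actions
    v = np.array([0.9, 0.0])         # Pavlovian approach bias toward option 0
    print(blended_value(q, v, 0.9))  # uncontrollable: Pavlovian wins, picks 0
    print(blended_value(q, v, 0.1))  # controllable: instrumental wins, picks 1
    ```
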
  • Article
    | Open Access

    People are able to mentally time travel to distant memories and reflect on the consequences of those past events. Here, the authors show how a mechanism that connects learning from delayed rewards with memory retrieval can enable AI agents to discover links between past events to help decide better courses of action in the future.

    • Chia-Chun Hung
    • , Timothy Lillicrap
    •  & Greg Wayne
  • Article
    | Open Access

    Rewards can improve stimulus processing in early sensory areas, but the underlying neural circuit mechanisms are unknown. Here, the authors build a computational model of layer 2/3 primary visual cortex and suggest that plastic inhibitory circuits change first, enabling excitatory representations to increase and persist beyond the presence of rewards.

    • Katharina Anna Wilmes
    •  & Claudia Clopath
  • Article
    | Open Access

    Is there an optimum difficulty level for training? In this paper, the authors show that for the widely used class of stochastic gradient-descent-based learning algorithms, learning is fastest when the accuracy during training is 85%; a short numerical check follows this entry.

    • Robert C. Wilson
    • , Amitai Shenhav
    •  & Jonathan D. Cohen
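
    A short numerical check of the optimum under the paper's Gaussian setup: the error rate is ER = Φ(−βΔ) for precision β and difficulty Δ, and under a standard gradient-descent argument the learning speed scales as the squared gradient (∂ER/∂β)². Sweeping difficulty puts the peak at ER = Φ(−1) ≈ 15.9%, i.e. roughly 85% training accuracy.

    ```python
    import numpy as np
    from scipy.stats import norm

    beta = 1.0                                   # learner's current precision
    delta = np.linspace(0.01, 4.0, 2000)         # sweep of stimulus difficulty
    error_rate = norm.cdf(-beta * delta)         # ER = Phi(-beta * delta)
    speed = (delta * norm.pdf(beta * delta))**2  # (dER/dbeta)^2: learning speed

    best = np.argmax(speed)
    print(f"optimal training accuracy: {1 - error_rate[best]:.3f}")  # ~0.841
    ```
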
  • Article
    | Open Access

    How are stable memories maintained in the brain despite significant ongoing fluctuations in synaptic strengths? Here, the authors show that a model consistent with fluctuations, homeostasis and biologically plausible learning rules, naturally leads to memories implemented as dynamic attractors.

    • Lee Susman
    • , Naama Brenner
    •  & Omri Barak
  • Article
    | Open Access

    Recent experimental work has revealed non-linear dendritic integration in interneurons. Here, the authors show, through detailed biophysical modeling, that fast-spiking interneurons are better described by a two-stage artificial neural network model, calling into question the use of point-neuron models.

    • Alexandra Tzilivaki
    • , George Kastellakis
    •  & Panayiota Poirazi